SERT Conference on Software Engineering 2024
In this fourth conference in a series organized by BTH in collaboration with industry partners, we explore next-generation challenges facing companies developing software-intensive systems and products. This two-day event (Wednesday, 2024-11-20, and Thursday, 2024-11-21), hosted by Spotify in Stockholm and organized by BTH, is designed to inspire, educate, and foster collaboration among professionals at the forefront of industry and academia.
Join us as we explore the future of software engineering and work together to shape the next generation of technology. Register now, free of charge.
"Have your cake and eat it: Reconciling AI and Privacy in the Hello Magenta digital voice assistant" by Dr. Harald Störrle
Welcome to the SERL Sweden Fika talk! We look forward to a talk by our visitor Dr. Harald Störrle, Lead IT Consultant at QAware and instructor at the ERMSEI 2023 school.
Thursday, 28 September, 14:00 – 15:00
Join onsite at BTH, J-building, Department of Software Engineering
Join remotely via Zoom: https://bth.zoom.us/j/68341963791?pwd=MzZ6NitOSUR0c1ZrMExNTzQ1ZkE0dz09
For any questions, please contact Anna Eriksson, aes@bth.se
Continuous Engineering and Hybrid Work Conference - September 4-5, Kista
- Opportunities and challenges of adopting continuous engineering
- Low-hanging fruits to benefit from continuous principles
- Feedback loops in engineering
- Compliance in continuous engineering
- Hybrid working: the state-of-practice
- Supporting flexible work policy
- Planning office renovations
- Value of office presence
In case of any questions, please contact eriks.klotins@bth.se and darja.smite@bth.se
Next Generation of Fast, Continuous and Lean Software Development
13th October, 9:00 – 12:30, at Icon, Växjö, or online via Zoom
Join via Zoom
You have two options to join the conference via Zoom:
- Either by downloading the client: click the Zoom link https://bth.zoom.us/j/67596277373. When entering a Zoom meeting for the first time from a computer, you will need to download an application file. The process is easy to complete on all commonly used browsers. Once the application is installed it will launch; before entering the meeting you will be prompted to enter a display name.
- Or by joining without installing the Zoom client: click the Zoom link https://bth.zoom.us/j/67596277373; on the navigation page, choose the option at the bottom of the page, “join from your browser”, enter a display name, verify that you are not a robot, and welcome to join in.
The KKS-supported research profile Software Engineering ReThought combines a solid knowledge base in empirical software engineering with multi-vocal co-production to formulate a new research philosophy and to rethink how we do research to solve emerging software engineering problems of high practical relevance. Multi-vocal means here that we, as software engineering researchers, use knowledge and expertise from various research disciplines to address specific practical software engineering challenges together with our long-term partners from the relevant industries.
Under the umbrella of this project, the Software Engineering Research Lab at the Blekinge Institute of Technology and its partner Maxkompetens are jointly organising a conference to present visions and groundbreaking results on the Next Generation of Fast, Continuous and Lean Software Development. The goal of this event is to foster lively discussions and exchange on contemporary industrial challenges and ongoing research.
You, as a member of the SERT project (rethought.se), are cordially invited to join us, either onsite or remotely, for our half-day conference! Naturally, you can bring colleagues interested in topics around “continuous software engineering”, “technical debt”, or “next-generation agile”.
Note that the conference will be run in a hybrid format (both physical and virtual). You select your preference upon registration.
We very much look forward to welcoming you soon!
Program – Full abstracts and bios below
- 08:30 – Coffee and mingling
- 09:00 – 09:45 Continuous Everything – Faster or Better or Both – Eriks Klotins
- 09:50 – 10:35 Asset Management – Technical Debt Benchmark – Javier Huerta & Ehsan Zabardast
- 10:35 – 11:00 Coffee Break / Fika
- 11:00 – 11:45 Information Diffusion in Software Engineering – on Improving Communication – Michael Dorner
- 11:50 – 12:35 Sensible Automation and Testing – What is the Cost/Benefit – Emil Numminen & Emil Alégroth
- 12:35 – Joint lunch
Continuous Everything – Faster or Better or Both – Eriks Klotins
Abstract: Most have heard of CI/CD as an approach to continuous and streamlined software deliveries. However, few realize that achieving the full potential requires continuous *product planning*, *management*, *design*, *engineering/development*, *integration*, *testing*, and *delivery*. That is, to realize the potential, the entire company must contribute. How is this done? How and what can be automated? What speed is needed? What are the benefits, and what are the costs? What investments into “continuous” are relevant in a given market and domain setting? What are the costs of retrofitting existing products with a continuous pipeline? We offer an overview and present ongoing research into “sensible”, evidence-based automation and continuous activities: identifying bottlenecks and how the continuous concept can fit your domain, context, and organization.
Dr. Eriks Klotins works on the cost-benefit perspective of continuous software engineering (CI/CD). His work includes analyzing how to best utilize continuous principles throughout the organization to benefit from faster time-to-market, frequent customer input, and data-driven techniques to fine-tune the product to customers’ exact needs and attain organizational objectives. Eriks has extensive industry experience developing software products in fast-paced, dynamic environments. He recently received a Ph.D. in the area of software engineering practices for start-ups. Currently, Eriks is a post-doctoral researcher at the Blekinge Institute of Technology.
Asset Management – Technical Debt Benchmark – Javier Huerta & Ehsan Zabardast
Abstract: Most people talk about technical debt, and many tools exist. But what is it actually? We are reinventing the area by focusing on the “assets” that an organization creates and needs to offer its products and services. This includes code, but much more than that. Currently, we are focusing on how “hot-spots” and problems can be identified and measured as part of finding the biggest issues in your code/product/service. We benchmark your product/service and validate the results with you. You get a free benchmark/evaluation; we get input to improve tools and measurements that you can keep and use. For us it is research; for you it is great.
Dr. Javier Gonzalez Huerta is an associate professor in the Software Engineering Department at BTH. He received his PhD in Computer Science from the Universitat Politècnica de Valencia (UPV) in 2014, after working in industry for about 15 years. Javier’s research focuses on the areas of asset management and technical debt, and he has been doing applied research together with the SERT industrial partners for more than five years.
Ehsan Zabardast is a PhD candidate in software engineering at the Software Engineering Research Lab Sweden at BTH. He holds a master’s degree in informatics and data science. His main research involves software assets, asset management and degradation, technical debt, and software architecture. His current work includes studying how assets degrade in relation to other aspects of software development. A major part of his research involves studying technical debt and moving beyond the metaphor. He has been involved in research together with SERT industrial partners, including Fortnox, Ericsson, Volvo CE, and others, for over three years.
Information Diffusion in Software Engineering – on Improving Communication – Michael Dorner
Abstract: Communication is a key pillar in software engineering. All kinds of information are exchanged among individuals (software developers, product owners, testers, data engineers, etc.), teams, locations, or even organizations using different communication tools and platforms. In our research, we aim to understand how information spreads in software engineering, how we can model it efficiently, and how we can use those models to improve communication. To this end, we are developing robust and reliable models to capture, simulate, evaluate, compare, and predict the diffusion of information in different types of social networks in software engineering. From those models, we derive communication patterns and evaluate their impact on information diffusion.
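To make the modelling concrete, here is a minimal sketch of one standard diffusion model, an independent cascade, simulated on a synthetic network with Python's networkx. The graph shape, seed node, and propagation probability are illustrative assumptions, not the project's actual models.

```python
import random
import networkx as nx

def independent_cascade(G, seeds, p=0.1, rng=None):
    """Simulate one independent-cascade run: each newly informed node gets
    a single chance to pass the information to each neighbor with prob. p."""
    rng = rng or random.Random(42)
    informed = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in informed and rng.random() < p:
                    informed.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return informed

# Synthetic stand-in for a communication network (e.g., code-review ties).
G = nx.barabasi_albert_graph(n=200, m=2, seed=7)
reached = independent_cascade(G, seeds=[0], p=0.1)
print(f"Information reached {len(reached)} of {G.number_of_nodes()} members")
```

Running several such simulations with different seed nodes is one simple way to compare how well different positions in a network spread information.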
Michael Dorner is a researcher and PhD student at BTH. His research explores and visualizes communication networks as distributed, decentralized knowledge repositories that encode and decode, over time, the information required for and during software development. Information diffusion, the spread of information in such communication networks, reveals communication patterns that enable practitioners to assess the quality of internal information exchange, identify communication bottlenecks, and improve development processes and organizational structures. Before academia, he was a data engineer at Siemens Healthcare, developing distributed ML architectures.
Next Generation of Fast, Continuous and Lean Software Development is organized in collaboration with
PLEng Seminar Day 13th May
Time: Monday 13th May, 9:45 – 18:00 Venue: BTH, Karlskrona, Lövsalen (Library)
Welcome to the PLEng Mini-conference, a full day of presentations of interest to industrial partners and researchers. The program includes seven sessions by PLEng licentiate candidates in which they present their research areas and future approaches, addressing challenges and innovative solutions to specific problems within industry. The PLEng Mini-conference offers an opportunity for industry and researchers to meet and discuss applied research in industry.
The event is free for SERL/SERT partner companies. Register as soon as possible here. For any questions, send an email to Anna Eriksson, aes@bth.se
We are also live-streaming all the presentations during the seminar day for virtual attendees. Join in via https://bth.zoom.us/j/310852993
Welcome!
Best regards,
Prof. Dr. Tony Gorschek
Senior Research Leader SERT…
Program
Coffee/tea and sandwiches are served from 09:45 outside J1610
Welcome and intro talk – Tony Gorschek
The Agile and Lean movements (some would say bandwagons) have shaken the software industry for 18 years now and have collectively overturned many old truths. Or have they? Or are there other, under-researched areas that contribute to the purported success stories of Agile and Lean projects? I will also propose future study angles and report some initial results from an ongoing case study.
Context: Continuous integration (CI) is a practice that aims to continuously verify quality aspects of a software-intensive system to give fast feedback to the developers, both for functional and non-functional requirements (NFRs). Functional requirements are the direct result of development and can be tested in isolation, utilizing either manual or automated unit, integration, or system tests. In contrast, some NFRs are hard to test without functionality, as NFRs are often aspects of functionality. This lack of testability makes NFR testing complicated and therefore underrepresented in industrial practice. However, the emergence of CI has radically affected how software-intensive systems are developed and has created new avenues for software quality evaluation and quality information acquisition. Research has therefore been devoted to the utilization of this additional information for more efficient and effective NFR verification to preserve resources and time. Objective: We aim to identify the state-of-the-art (SOTA) research utilizing the CI environment for NFR testing, in continuation referred to as CI-NFR testing, and provide a synthesis of open challenges for CI-NFR testing.
Method: We conducted a systematic literature review (SLR). Through rigorous selection, from an initial set of 747 papers, we identified 47 papers that describe how NFRs are tested in a CI environment. Evidence-based analysis, through coding, was performed on the identified papers. Results: First, ten different CI approaches are described by the papers selected for this SLR, each describing different tools, and a total of nine different types of NFRs were reported to be tested. Second, although possible, CI-NFR testing is associated with at least 10 challenges that adversely affect its adoption, use, and maintenance costs. Third, the identified CI-NFR testing processes are tool-driven, but currently there is a lack of NFR testing tools that can be used in the CI environment. Finally, we propose a CI framework for NFR testing. Conclusion: A synthesized CI framework is proposed for testing various NFRs, and associated CI tools are mapped to the components of the framework. This contribution is valuable, as the results of the study also show that CI-NFR testing can help improve NFR testing quality in industrial practice. However, the results also indicate that CI-NFR testing is currently associated with several challenges that need to be addressed through future research.
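As an illustration of what CI-NFR testing can look like in practice, here is a minimal, hypothetical pytest check that enforces a latency budget as part of a CI pipeline; the stubbed handle_request function and the 50 ms budget are invented for the example and are not taken from the papers in the review.

```python
import time
import statistics


def handle_request() -> str:
    """Stand-in for the system under test (hypothetical)."""
    time.sleep(0.005)  # simulate some work
    return "ok"


def test_latency_budget():
    """NFR check runnable in any CI pipeline: p95 latency must stay under 50 ms."""
    samples = []
    for _ in range(100):
        start = time.perf_counter()
        handle_request()
        samples.append((time.perf_counter() - start) * 1000)  # milliseconds
    p95 = statistics.quantiles(samples, n=20)[-1]  # 95th percentile
    assert p95 < 50.0, f"p95 latency {p95:.1f} ms exceeds 50 ms budget"
```

Because the check is just a test, it runs on every commit and gives the fast feedback on a non-functional property that the abstract argues is usually missing.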
12:00 – 13:00 Lunch at Bistro J.
The requirements on Authentication, Authorization and Accounting (AAA) for IoT systems are to a large extent context-dependent. The context for IoT systems can span a wide spectrum, ranging from the 3GPP classification of “Critical communications IoT systems” (such as a connected heart pacemaker or glucometer), to “Enhanced Mobile Broadband” communication, to “Massive IoT” systems (such as connected bulbs). Besides, a single IoT system may itself have varying AAA requirements based on its own context, such as the time of day, the type of activities being performed by the device, geographical location, and power/battery state, to name a few. There are therefore various AAA characteristics and challenges for IoT systems, and a substantial amount of literature pertains to them. Hence, a Systematic Literature Review (SLR) is needed to consolidate this research. Through this SLR, we shall identify the characteristics of and challenges in AAA for IoT systems. With this, we shall also identify the gaps and the need for further research to meet the posed challenges and the various requirements.
Developments in the digital gig economy, often signified by firms using atypical work arrangements to mediate remunerated work via online labour platforms, have led to discussions over appropriate regulation in a number of countries. The labour platform debate touches on a broader discussion about labour market resilience in the form of widening income gaps, skill-biased technical change, and a declining labour share of income in OECD countries. These developments may be explained by declines in trade union density and collective bargaining coverage. Sweden remains an exception, with 69 percent union density and non-artificial collective bargaining coverage of 90 percent (2017). In this setting, we have observed labour platforms signing sectoral collective agreements. Here, we investigate the rationale for platform firms to sign collective agreements and conduct a law-and-economics analysis of these agreements to shed light on how they regulate a number of issues relating to overall labour market resilience.
Originating from large web players such as Google, Amazon, and Facebook, the concept of cloud-native applications (CNAs) is spreading across every industry. CNAs adapt quickly and cost-effectively to unpredicted changes. Due to the benefits provided by CNAs, Ericsson is in the process of transforming its existing digital services portfolio to CNAs. This transformation is significantly challenging, as the existing products and services, as well as the surrounding environments such as development organizations and delivery pipelines, are built around monolithic applications. In our first investigation, we presented challenges and research directions associated with monitoring and maintaining a large telecom system at Ericsson that was developed with a high degree of legacy application reuse. In the second study, we zoomed into one of the reused applications and investigated its architectural evolution over three releases. The goal of this research is to investigate the quality of a system architecture as it evolves over releases. As a quality criterion, we investigated a specific type of technical debt (TD) called architectural technical debt (ATD), introducing a bug classification framework to identify ATD hints. ATD is a type of debt that is difficult to identify and measure with existing automatic code analysis tools. The bug classification framework aims to bridge this identification gap by providing a practical classification scheme. Once bugs can be classified into ATD categories, different stakeholders, such as architects and managers, can make appropriate decisions on the management of ATD. In the third study, we are investigating the challenges of migrating an Ericsson subscriber provisioning system to CNAs from multiple perspectives. The study attempts to systematically identify and validate specific areas impacted by the migration, against the cloud-native migration challenges already identified by prior research.
Reducing energy consumption has been an increasing trend in machine learning over the past few years due to its socio-ecological importance. In new, challenging areas such as edge computing, energy consumption and predictive accuracy are key variables during algorithm design and implementation. State-of-the-art stream mining algorithms are able to create highly accurate real-time predictions on evolving datasets while adhering to the low computational requirements needed to run on edge devices. This is the case for the Hoeffding Adaptive Tree algorithm. This algorithm achieves high levels of predictive accuracy on evolving datasets by increasing the amount of computation, thus increasing its energy consumption. This paper proposes to extend the Hoeffding Adaptive Tree algorithm into a more energy-efficient version, named Green Hoeffding Adaptive Tree (GHAT). GHAT uses a per-node energy growth adaptation approach that has already been implemented and tested in the most recent Hoeffding tree algorithm, showing promising energy reductions.
Keywords: GreenAI · Hoeffding Trees · Data Stream Mining · Energy Efficiency
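For readers who want to experiment with the baseline that GHAT extends, here is a minimal sketch of training a Hoeffding Adaptive Tree on a data stream using the Python river library. GHAT itself is the paper's contribution and is not part of river; the dataset and hyperparameters below are illustrative choices.

```python
from river import datasets, metrics, tree

# Baseline Hoeffding Adaptive Tree (the algorithm GHAT extends).
model = tree.HoeffdingAdaptiveTreeClassifier(grace_period=100, seed=42)
accuracy = metrics.Accuracy()

# Prequential (test-then-train) evaluation over an evolving stream.
for x, y in datasets.Phishing():
    y_pred = model.predict_one(x)   # test first ...
    accuracy.update(y, y_pred)
    model.learn_one(x, y)           # ... then train

print(f"Prequential accuracy: {accuracy.get():.3f}")
```

The test-then-train loop is the standard way to evaluate stream learners, since every example is seen exactly once, mirroring the edge-computing setting the abstract describes.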
10 min coffee/stretch legs
Technical debt (TD) items are manifestations of poor quality introduced by sub-optimal design decisions. Refactoring is a practice that developers perform to improve code quality and increase its maintainability and understandability. Some refactoring operations aim to improve code quality by removing specific types of TD items; others help developers improve code understandability or write more “clean code”. While the motivations behind the refactoring operations developers use have been investigated before, there is a lack of empirical evidence retrospectively investigating which files are the usual suspects for refactoring (i.e., are big files more prone to being refactored, or the ones with more TD items?). To fill this gap, we conducted an empirical study on three open source systems to investigate what matters when it comes to refactoring. We analyzed 16,150 commits in total to identify whether refactorings are more likely to happen in files containing more TD items or in bigger files. The main result is that size was found to be a significant factor in all the systems under analysis, whilst the number of TD items was not found to be significant in any of the systems.
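A rough sketch of the kind of analysis such a study implies: a logistic regression relating per-file size and TD-item counts to whether the file was refactored. The data here is synthetic and the model deliberately simple; the study's actual data and statistical setup are not reproduced.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000  # synthetic "files"

size = rng.lognormal(mean=5, sigma=1, size=n)   # lines of code per file
td_items = rng.poisson(lam=3, size=n)           # TD items per file
# Synthetic ground truth mirroring the paper's finding:
# size drives refactoring, TD items do not.
logit = -6 + 0.004 * size + 0.0 * td_items
refactored = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([size, td_items]))
result = sm.Logit(refactored.astype(int), X).fit(disp=False)
print(result.summary(xname=["const", "size", "td_items"]))
```

In the printed summary, a significant coefficient for `size` and a non-significant one for `td_items` would correspond to the result the abstract reports.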
SERT RESEARCH and INDUSTRY KICK-OFF EVENT
Time: Monday 1st of October, 9:00 – 17:00 (..ish) Venue: Telia, Stjärntorget 1, Solna, STOCKHOLM
The SERT Research and Industry event offers an extensive full-day program of interest to industrial partners, to discuss and highlight bleeding-edge research and have a great kick-off!
The first part of the conference includes presentations of SERT’s six sub-projects, as they address both specific and overall challenges identified in dialogue with our partners thus far. The second part of the conference aims at an interactive mapping between the sub-projects, industry challenges, and each industrial partner, organised as an open discussion forum with mingling and poster islands where you as a participant can move between areas. It will be an opportunity for researchers/sub-project leaders to meet industry representatives to discuss and identify challenges and starting points for the upcoming research and collaboration in SERT.
The event is free for SERT partner companies only (max 5 participants per company). The agenda (subject to change) can be seen below. The full program and other conference information will be sent out in early September. In addition to physical presence, the initial seminars will also be made available in an online streaming format for partners only (more information to come). PLEASE take time to register as soon as possible, as there are many on the waiting list.
Watch the SERT Kick-off live
We are also live-streaming all the presentations during the event for virtual attendees.
Please help us share the live-streaming link with everyone interested!
Join via: https://www.bth.se/events/kick-off-sert/
Program
We arrive at Telia’s great, modern venue around nine so that we have time to register and proceed to the event in time for the 9:30 start. Breakfast is served from 07:30 if you drop in early.
Project manager and senior research scientist Prof. Dr. Dr. Gorschek gives an introduction to the research profile, with an overview of utilizing multi-vocal research in combination with third-generation empirical software engineering to solve tomorrow’s challenges today!
SP1: Augmented Automated Testing: leveraging human-machine symbiosis for high-level test automation
The software market has, over the last couple of years, been spurred on by a need for speed that shows no sign of slowing down. This trend has fostered a culture of test automation, since manual testing has been unable to scale with the size and speed of modern software development practices. Further, automation is requested at all levels of system abstraction, from small unit tests of individual software components to large-scale end-to-end system tests at a GUI level of abstraction.
However, traditional testing, manual as well as automated, has relied on human users to define the test scenarios, acting in a parasitic manner that forces the user to define the tests, decide how and when to run them, and analyze the results as the final oracle. In fact, some test purposes, such as test exploration, cannot even be fully automated due to lacking test oracles. Simply put, a cognitive human being is required today to identify correct and incorrect system behavior. But what if we could change this dynamic?
In this research, the ultimate goal is to find ways to leverage the cognitive power of users to explore and find defects, faults, and tests, whilst allowing machines to perform the repetitive and boring tasks that make humans error-prone when executing them. This will be achieved by utilizing advances in machine learning and artificial intelligence (AI) to foster a mutualistic (from mutualism) collaboration between tool and user rather than a parasitic relationship. Mutualism will enable new and smarter tools to learn from the user, process the learnings, and provide the user with feedback to improve the user’s capabilities. These improvements would, through reinforcement learning, make the tool even smarter and more capable, which in turn positively affects the tool’s capability to guide the user, creating a positive feedback loop that fosters joint, mutualistic improvement of both user and tool.
However, with this new technology come many new challenges, questions, and concerns, such as:
- How do we construct a system with these characteristics?
- How do we efficiently train such a system?
- How is user trust affected when the system fails and how can it be reacquired?
- How do we maintain such a system?
- Where in the continuous pipeline does such a system fit to optimize its value?
SP2: Heterogeneous multi-source requirements engineering
Companies are currently exposed to large amounts of heterogeneous data originating from business intelligence, product usage data, reviews, and other forms of feedback. This challenges requirements identification and concretization and creates demands for revisiting requirements management activities. A growing trend is also that a substantial amount of this data is generated by machine-learning components integrated into software products that are self-adaptive (e.g., systems with deep learning algorithms). This means that these software products not only continuously provide data about the changing environment, but also self-adapt and change their behaviour based on contextual fluctuations (so-called non-deterministic behaviour). This sub-project focuses on how to support the inception, realization, and evolution phases of software systems development through efficient data acquisition and analysis approaches and machine learning.
In this talk, we will revisit requirements engineering activities and focus on how we can transform them to be more data-intensive and to better support:
i) data collection and problem formulation (intelligence, identifying relevant data sources, filtering relevant information from non-relevant) – requirements analysts cannot analyze all available data, so intelligent filtering and triage are needed to support requirements screening and early removal of irrelevant information (see the sketch after this list);
ii) development of requirements realization alternatives (prioritizing these opinions and presenting them to decision-makers) – requirements prioritization can be data-driven and complement expert opinions by using analogy and product usage data;
iii) evaluation of these alternatives (semi-automated analysis of product usage data and user feedback), where product usage data helps to understand the consequences and model the projected customer response to new functionality.
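A minimal sketch of the kind of filtering and triage mentioned in point i): a TF-IDF text classifier that ranks incoming user feedback by estimated requirements relevance. The training snippets and labels are invented for illustration; a real deployment would train on the company's own labeled feedback.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = requirements-relevant, 0 = noise.
feedback = [
    "App crashes when exporting a report to PDF",
    "Please add dark mode to the settings page",
    "Login takes over ten seconds on mobile",
    "Great app, five stars!",
    "I love the new logo",
    "Thanks for the quick support reply",
]
labels = [1, 1, 1, 0, 0, 0]

triage = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
triage.fit(feedback, labels)

incoming = ["Export to CSV would save us hours", "Nice colors in this release"]
for text, p in zip(incoming, triage.predict_proba(incoming)[:, 1]):
    print(f"{p:.2f}  {text}")  # analysts review the highest-scoring items first
```

Ranking rather than hard filtering keeps the analyst in the loop, which matches the screening-and-early-removal role described above.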
SP3: Value-Oriented Strategy to Detect and Minimize Waste
Software companies operate in a competitive market in which changes need to be made under aggressive deadlines, and those deadlines are sometimes in themselves a company’s competitive advantage. This time pressure might force companies to make ineffective use of resources by generating waste or overhead, e.g., investing time and money in activities that do not produce any value. Examples include investing in the analysis and prototyping of requirements that will never be included in a product, or barely-good-enough decisions that might have a great impact on several areas of product development. The consequences are severe and include lower efficiency of requirements processing and decision making, code and architectural erosion, and sub-optimal usage of testing resources.
The goal of this research is to identify and mitigate the different types of waste and overhead in the different stages of the software development process, to allow organizations to focus on value creation. Most of the work has focused on development and maintenance activities, but we still need a broader view of the problem, including the as-yet unexplored waste in requirements and testing.
In this talk we will introduce, and illustrate with examples, the different types of waste and overhead in the inception, realization, and maintenance stages of the development process. Some of these types have a clear impact on waste, like the requirements prioritization problem of choosing the “right” features in the inception phase. The problem is that some types of overhead can be mistaken for waste, like intra- and inter-team communication, and minimizing them introduces even more waste (lack of understanding due to lack of communication).
The concerns that remain open are:
How can we identify waste? What is overhead? How can we avoid both and focus on value creation?
All these questions are covered in the SERT subproject “Value-Oriented Strategy to Detect and Minimize Waste”
Lunch served at the venue
SP4: Cognitive software engineering development models
SP5: Study and Improve LeaGile handling of organizational and team interfaces
Staying competitive in today’s software market requires a high degree of flexibility and the ability to adapt to changing market conditions. Agile and Lean software development have been widely adopted as solutions for reducing needless work and increasing flexibility. Open Source approaches have revolutionised software development through the use of open collaboration platforms with advanced tools and methods for code sharing and communication. However, these methods have issues with scaling and do not inform companies how to deal with the cognitive and organisational limitations that prevent high work performance and flexibility from being realised in practice.
Traditionally, software development processes are seen as pipelines that transform inputs to outputs – for example, requirements become specifications, and specifications become code. Part of what moves in the process is software architecture in the traditional sense – high-level blueprints that specify how a software system is structured and how the software components fit together. The process steps and the components of the architecture can be mapped to units in the software organisation. We know that this relationship is an important factor that affects communication and coordination in the organisation. We also know that it influences cognitive load, social interactions, and motivation among individual software developers. A well-designed system of processes, architectures, and organisational and team interfaces translates to higher performance and flexibility and can better support developers’ strengths and help them overcome their weaknesses.
In this talk, we illustrate why processes, architectures, and organisational and team interfaces form a crucial system of work design in software development and how they are related to developers’ cognition, feelings, and motivation. We give examples of how improvements in the work design have led to increased performance and flexibility, and discuss what could be possible in terms of creating intelligent automation that makes processes and architectures interactive rather than static pipelines and blueprints.
These topics are covered in the SERT sub-projects “Cognitive software engineering development models” and “Study and Improve LeaGile handling of organizational and team interfaces”.
SP6: Verification of Software Requirements in Dynamic, Complex and Regulated Markets
Software engineering is a data- and people-intensive activity. Data is accumulated, analyzed, and transformed in order to drive activities such as requirements analysis, software design and implementation, testing, and long-term maintenance. While task specialization and processes help to cope with the demands of the growing complexity of today’s software products, a lot can be gained by supplementing human intelligence with computational intelligence. In the past years, we have identified, studied, and analyzed human software engineering processes and designed support systems that help engineers perform their tasks more effectively and efficiently.
One example is interactive support for writing requirements specifications that indicates adherence to certain quality rules. Identifying defects in requirements as they are written reduces reviewing costs and frees up resources to verify quality aspects for which humans are still the best judges.
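A minimal sketch of what such an interactive quality check might look like: a rule-based scan for common requirement “smells” such as vague terms and missing modal verbs. The rules and word lists are illustrative assumptions, not the actual tool described in the talk.

```python
import re

# Illustrative quality rules; a real tool would use a curated, domain-tuned set.
VAGUE_TERMS = re.compile(r"\b(user-friendly|fast|easy|flexible|as appropriate|etc)\b", re.I)
MODAL = re.compile(r"\b(shall|must|should)\b", re.I)

def check_requirement(text: str) -> list[str]:
    """Return human-readable findings for a single requirement sentence."""
    findings = []
    if (m := VAGUE_TERMS.search(text)):
        findings.append(f"vague term: '{m.group()}'")
    if not MODAL.search(text):
        findings.append("no modal verb (shall/must/should): obligation unclear")
    return findings

for req in ["The system shall respond within 2 seconds.",
            "The UI should be user-friendly and fast."]:
    print(req, "->", check_requirement(req) or "OK")
```

Because the checks run in milliseconds, they can be wired into an editor to flag issues while the analyst is still typing, which is the point of catching defects at writing time.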
Another example is the semi-automated identification of domain-specific synonyms. In large organizations that collaborate with external suppliers and customers, agreeing on a common terminology is often tedious and cost-intensive. Creating a common glossary can reduce ambiguity and misunderstandings internally, but also when interfacing with external partners.
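One plausible way to sketch such synonym mining (an assumed approach for illustration, not a description of the actual tool) is to train word embeddings on the organization's own documents and inspect nearest neighbours:

```python
from gensim.models import Word2Vec

# Tokenized sentences from the organization's requirements and documentation
# (a toy corpus here; real corpora would be far larger).
corpus = [
    ["the", "user", "signs", "in", "with", "credentials"],
    ["the", "customer", "logs", "in", "with", "credentials"],
    ["the", "user", "logs", "in", "to", "the", "portal"],
    ["the", "customer", "signs", "in", "to", "the", "portal"],
] * 50  # repeat so the toy model has enough co-occurrence data

model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=2, seed=1)

# Words appearing in similar contexts are candidate synonyms for the glossary.
for word, score in model.wv.most_similar("user", topn=3):
    print(f"{word}: {score:.2f}")
```

The output is only a candidate list; a terminologist would still confirm each pair before it enters the glossary, which is why the identification is semi-automated.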
In this talk we will illustrate practical examples of how human and computational intelligence can be combined to improve software engineering activities. In addition, we give an outlook on how these technologies could be utilized to check conformance to external requirements.
After an introduction to the base sub-projects and areas, each sub-project team will be available for discussions, questions, and details. This session is CRITICAL as an initial meet-and-greet, and the discussions open the opportunity for new ideas and initial bookings of meetings and workshops. We really hope you will be active and contribute!
This session is short but gives an overview of the next steps planned in the SERT research profile and, more importantly, gives YOU the opportunity to ask questions and share ideas. The session lasts as long as we need it to!…
SERT Virtual Summer Conference
Time: 16-17 September 2020, 8:45 – 15:15. Virtual via Zoom and YouTube
Options for joining:
- Zoom: https://bth.zoom.us/j/62888864541 (either via client or browser application)
- YouTube: https://youtu.be/zX7mNYbwdQw (Day 1) and https://youtu.be/6miq7GvPW-A (Day 2)
Visit the conference page for more details, e.g. on how to join the conference.
Day 1: Wednesday, 16th of September 2020
With more and more recipes on the market for how to build and scale your agile software delivery – SAFe, LeSS, the Spotify Model – it’s very hard to know where to start. Implementing a wholesale solution often takes time and leads to disappointing results. I would like to explore a different approach, one that starts with principles and common beliefs and develops towards a set of tools and practices through an incremental and experimental approach. Just like you would build your software.
Working from home (WFH), or telework, was until COVID-19 known mainly as a voluntary and often exceptional practice in the workplace. On the one hand, telework is often associated with a perceived increase in productivity and job satisfaction (mostly self-reported by teleworkers); on the other hand, with a great managerial issue and a loss of control (as reported by the managers). Many managers, before and today, are skeptical because they question the ability of their staff to handle remote infrastructure, solve any situation independently, manage their time properly, or work without supervision. But is telework really a problem? In this talk, we will share our findings from monitoring the company-wide transitions to WFH caused by COVID-19 in several software companies. Our analysis of commit data, calendar invitations, and Slack communication shows that software engineers continue working, and that their routines and work patterns are not that different from office times. Distributed work is surely not challenge-free. However, company support evidently results in many engineers not only surviving, but enjoying their experience of WFH. Hear more about the lessons learned from the WFH “experiment” in our talk.
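A minimal sketch of the kind of commit-data analysis behind such findings: comparing the distribution of commit hours before and after a WFH cutoff date in a local git repository. The cutoff date is a placeholder, and the actual study's methodology is considerably richer than this.

```python
import subprocess
from collections import Counter
from datetime import datetime

CUTOFF = datetime(2020, 3, 15)  # placeholder WFH transition date

# Author timestamps of all commits, one strict-ISO date per line.
log = subprocess.run(
    ["git", "log", "--pretty=format:%aI"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

before, after = Counter(), Counter()
for line in log:
    ts = datetime.fromisoformat(line)
    bucket = before if ts.replace(tzinfo=None) < CUTOFF else after
    bucket[ts.hour] += 1

for hour in range(24):
    print(f"{hour:02d}h  before={before[hour]:4d}  after={after[hour]:4d}")
```

Similar shapes in the two hourly distributions would be one simple signal that work patterns did not change much after the transition.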
A common denominator of most, if not all, software engineering companies is that they work team-based, meaning that the team has replaced the individual as the most critical organizational entity. Organizations are ill-equipped to manage this transition and are struggling to find effective ways to develop their teams. We have expertise in team development, and we can offer a solution to an existing and well-known problem.
This presentation will share our experiences of using the Integrated Model of Group Development (IMGD, https://gdqassoc.com/) to improve agile teams’ performance at Volvo and Saab. In our experience, this model appeals to managers as well as development teams.
During the first lunch discussion (note: eating and talking are allowed), we would like to raise the topic of teamwork in times of social distancing caused by the coronavirus pandemic and beyond, foreseeing the increased popularity of working from home in the coming years. We will talk about what team members can do to not lose touch, and which practices and rituals help to socialize, make joint decisions, and facilitate ad hoc conversations and queries. Anybody with experience and personal opinions is welcome to join!
For the last couple of decades, biometric sensors have boosted the performance of professional athletes. Nowadays, these sensors are available for anyone to monitor and improve their daily lives—including software developers. In this talk, I will show how some common sensors can be used to support software development—from requirements elicitation to source code comprehension—and illustrate use cases in which this area of research can be further developed.
Data-driven product discovery and evolution has become the main driver for business growth and competitive advantage. This talk revisits the fundamentals of requirements engineering from a data-driven perspective and points out the promising avenues for finding the growth potential based on the data that your customers generate. We discuss the main transformational steps towards data-driven requirements engineering and organizational challenges.
Information and communication technologies such as wikis offer several benefits for companies, such as reduced time constraints for sharing, a common knowledge base, and easier localization, retrieval, and reuse of knowledge. However, inefficiencies in the tools’ usage have brought many issues to agile environments, including duplicate and outdated information, multiple repositories, unawareness of knowledge sources, and incomplete or invalid information. In this talk, I show how these issues could be addressed with efficient knowledge repositories.
Day 2: Thursday, 17th of September 2020
Many codebases contain code that is overly complicated, hard to understand, and hence expensive to change and evolve. Prioritizing technical debt is a hard problem, as modern systems might have millions of lines of code and multiple development teams; no one has a holistic overview. In addition, there is always a trade-off between improving existing code and adding new features, so we need to use our time wisely. So what if we could mine the collective intelligence of all contributing programmers and start to make decisions based on information about how the organization actually works with the code?
In this presentation you’ll see how easily obtained version-control data lets you uncover the behavior and patterns of the development organization. This language-neutral approach lets you prioritize the parts of your system that benefit the most from improvements, so that you can balance short- and long-term goals guided by data. The specific examples are from real-world codebases like Android, the Linux kernel, .NET Core Runtime, and more. This new perspective on software development will change how you view code.
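A minimal sketch of the core idea, invented for illustration (the talk's own tooling is more sophisticated): combine per-file change frequency from the git history with current file size to rank hotspot candidates.

```python
import subprocess
from collections import Counter
from pathlib import Path

# Change frequency per file across the whole history.
log = subprocess.run(
    ["git", "log", "--numstat", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout
revisions = Counter()
for line in log.splitlines():
    parts = line.split("\t")
    if len(parts) == 3:           # "<added>\t<deleted>\t<path>"
        revisions[parts[2]] += 1

# Hotspot score: change frequency weighted by current size (lines of code).
scores = []
for path, n_changes in revisions.items():
    p = Path(path)
    if p.is_file():               # skip deleted/renamed files
        loc = sum(1 for _ in p.open(errors="ignore"))
        scores.append((n_changes * loc, n_changes, loc, path))

for score, n_changes, loc, path in sorted(scores, reverse=True)[:10]:
    print(f"{path}: {n_changes} changes x {loc} LOC = {score}")
```

Files that are both large and frequently changed are where improvement effort tends to pay off fastest, which is the prioritization argument the talk makes.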
Who takes the important decisions in your projects? Traditionally, all important strategy, structure, and work-design decisions, as well as most ongoing decisions about work procedures, have been taken by organizational management. Yet, as Tayloristic habits disappear, organizations willingly (or unwillingly) change their decision-making approaches to enable more participation and influence from the performers, which has given rise to participation-based parallel organizational structures such as quality circles, task forces, and communities of practice. The latter is the central topic of this talk. A community of practice (CoP) is usually a group of people with similar skills and interests who share knowledge, make joint decisions, solve problems together, and improve a practice. Communities of practice are cultivated for their potential to influence the knowledge culture and bring value to individuals, teams, projects, and the organization as a whole. Despite the assumed benefits, implementing successfully functioning CoPs is a challenge, and even more so in large-scale distributed contexts. In this talk, you will learn what helps to run successful communities of practice, based on findings from studying member engagement in large-scale distributed communities of practice at Spotify and Ericsson.
In this talk, we will present Scout, a tool demonstrating a novel approach to GUI testing that allows the user to record scriptless test scenarios into a model-based data structure from which test cases can be replayed, either manually or automatically, through the tested application’s GUI. The approach takes inspiration from augmented reality and essentially renders a head-up display on top of the SUT’s GUI to provide the tester with testing information. In addition to showing previously recorded test scenarios and test data, the tool also uses machine-learning algorithms to find patterns in recorded data and provide the tester with suggested test improvements, thus merging record-and-replay, model-based testing, semi-automated testing, and machine learning into one approach.
During the lunch discussion (note: eating and talking allowed), we will discuss the topic of quality assurance: how it is performed today and how it may look in the future. We will discuss current ways of working, as well as what new approaches might be on the horizon given the advances in machine learning and AI-based technologies. Anybody with experience and personal opinions is welcome to join!
Insufficient Requirements Engineering (RE) is said to negatively impact subsequent development activities. This is often corroborated by observations in practice, where companies invest little to no effort in RE, or effort in the wrong aspects of it, yielding quality deficiencies in the process and the final product. At the same time, RE is a means to an end rather than an end in itself.
Effort spent on RE needs to be well justified, which calls for a method for estimating compromises and flaws in RE, as well as their impact on the project. In this talk, I will present current approaches for evidence-based risk management in RE and outline a vision of how the notion of “good-enough RE” might be used for detecting and managing debt accrued in the RE phase, in turn providing evidence for estimating risk in RE.