Presentation materials from PLEng Seminar Day 13th May 2019
The Agile and Lean movements (some would say bandwagons) have shaken the software industry for 18 years now and have collectively overturned a lot of old truths. Or have they? Or are there other, under-researched areas that contribute to the purported success stories of Agile and Lean projects? I will propose future study angles and report some initial results from an ongoing case study.
Context: Continuous integration (CI) is a practice that aims to continuously verify quality aspects of a software-intensive system in order to give fast feedback to developers, for both functional and non-functional requirements (NFRs). Functional requirements are the direct result of development and can be tested in isolation, using either manual or automated unit, integration, or system tests. In contrast, some NFRs are hard to test without the underlying functionality, as NFRs are often aspects of that functionality. This limited testability makes NFR testing complicated and therefore underrepresented in industrial practice. However, the emergence of CI has radically affected how software-intensive systems are developed and has created new avenues for software quality evaluation and quality information acquisition. Research has therefore been devoted to utilizing this additional information for more efficient and effective NFR verification in order to save resources and time. Objective: We aim to identify the state-of-the-art (SOTA) research utilizing the CI environment for NFR testing, hereafter referred to as CI-NFR testing, and to provide a synthesis of open challenges for CI-NFR testing.
Method: We conducted a systematic literature review (SLR). Through rigorous selection from an initial set of 747 papers, we identified 47 papers that describe how NFRs are tested in a CI environment. Evidence-based analysis, through coding, was performed on the identified papers. Results: First, the selected papers describe ten different CI approaches, each involving different tools, and report a total of nine different types of NFRs being tested. Second, although feasible, CI-NFR testing is associated with at least ten challenges that adversely affect its adoption, use, and maintenance costs. Third, the identified CI-NFR testing processes are tool-driven, but there is currently a lack of NFR testing tools that can be used in a CI environment. Finally, we propose a CI framework for NFR testing. Conclusion: A synthesized CI framework is proposed for testing various NFRs, and associated CI tools are mapped to the components of the framework. This contribution is valuable, as the results of the study also show that CI-NFR testing can help improve NFR testing quality in industrial practice. However, the results also indicate that CI-NFR testing is currently associated with several challenges that need to be addressed through future research.
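To make the idea of CI-NFR testing concrete, the following is a minimal sketch of a performance-style NFR check that could run as an ordinary stage in a CI pipeline alongside functional tests. It is not taken from any of the reviewed papers; the `render_invoice` function, the 200 ms latency budget, and the sampling scheme are hypothetical illustrations.

```python
# Minimal sketch of an NFR (performance) check runnable as a CI pipeline stage.
# The function under test, the latency budget, and the sample count are
# hypothetical placeholders, not taken from the reviewed papers.
import statistics
import time
import unittest


def render_invoice(order_lines):
    """Hypothetical function under test: naive invoice rendering."""
    return "\n".join(f"{name}: {qty}" for name, qty in order_lines)


class InvoiceLatencyNFRTest(unittest.TestCase):
    LATENCY_BUDGET_SECONDS = 0.200  # assumed non-functional requirement

    def test_median_latency_within_budget(self):
        order = [("item", i) for i in range(1000)]
        samples = []
        for _ in range(30):  # repeated measurements to dampen CI runner noise
            start = time.perf_counter()
            render_invoice(order)
            samples.append(time.perf_counter() - start)
        self.assertLess(statistics.median(samples), self.LATENCY_BUDGET_SECONDS)


if __name__ == "__main__":
    unittest.main()
```

Run from a CI job as `python -m unittest`, such a test turns a latency requirement into a pass/fail signal that the pipeline can report on every commit.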
The requirements on Authentication, Authorization and Accounting (AAA) for IoT systems are to a large extent context-dependent. The context for IoT systems can span a wide spectrum, following the 3GPP classification of IoT systems, from "Critical Communications" IoT systems (such as a connected heart pacemaker or a glucometer), through "Enhanced Mobile Broadband" communication, to "Massive IoT" systems (such as connected light bulbs). Moreover, a single IoT system may itself have varying AAA requirements based on its own context, such as the time of day, the type of activity being performed by the device, its geographical location, and its power/battery state, to name a few. IoT systems therefore exhibit a variety of AAA characteristics and challenges. A substantial amount of literature pertains to the AAA characteristics of IoT systems and their challenges; hence, a Systematic Literature Review (SLR) is warranted. Through this SLR, we will identify the AAA characteristics and challenges of IoT systems, as well as the gaps and the need for further research to meet the posed challenges and the various requirements.
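As a toy illustration of such context dependence, the sketch below authorizes an operation differently depending on device class and runtime context. The device classes echo the 3GPP categories mentioned above, but the attribute names, thresholds, and policy rules are invented for illustration and do not come from the reviewed literature.

```python
# Toy illustration of context-dependent authorization for an IoT device.
# All attribute names, thresholds, and policy rules are invented examples.
from dataclasses import dataclass


@dataclass
class DeviceContext:
    device_class: str     # e.g. "critical", "broadband", "massive"
    hour_of_day: int      # 0-23
    battery_percent: int  # 0-100
    in_home_region: bool  # coarse geographical context


def authorize_firmware_update(ctx: DeviceContext) -> bool:
    """Decide whether a firmware update may start under the current context."""
    if ctx.device_class == "critical":
        # Critical-communications devices: only update when well charged
        # and located in the expected region.
        return ctx.battery_percent >= 80 and ctx.in_home_region
    if ctx.device_class == "massive":
        # Massive-IoT devices (e.g. connected bulbs): defer updates to night hours.
        return ctx.hour_of_day >= 22 or ctx.hour_of_day < 6
    # Default policy for the remaining device classes.
    return ctx.battery_percent >= 30


print(authorize_firmware_update(
    DeviceContext("critical", hour_of_day=14, battery_percent=90, in_home_region=True)))
```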
Developments in the digital gig-economy, often signified by firms using atypical work arrangements to mediate remunerated work via online labour platforms, have led to discussions over appropriate regulation in a number of countries. The labour platform debate touches on a broader discussion over labour market resilience in the form of widening income gaps, skill-biased technical change, and a declining labour share of income in OECD countries. These developments may be explained by declines in trade union density and collective bargaining coverage. Sweden remains an exception, with 69 percent union density and non-artificial collective bargaining coverage of 90 percent (2017). In this setting, we have observed labour platforms signing sectoral collective agreements. Here, we investigate the rationale for platform firms to sign collective agreements and conduct a law and economics analysis of these agreements to shed light on how they regulate a number of issues relating to overall labour market resilience.
Originating from the large web players such as Google, Amazon and Facebook, the concept of cloud-native applications (CNAs) is spreading across every industry. CNAs adapt quickly and cost-effectively to unpredicted changes. Due to the benefits provided by CNAs, Ericsson is in the process of transforming its existing digital services portfolio to CNAs. This transformation is significantly challenging, as the existing products and services, as well as the surrounding environments such as development organizations and delivery pipelines, are built around monolithic applications. In our first investigation, we presented challenges and research directions associated with monitoring and maintaining a large telecom system at Ericsson that was developed with a high degree of legacy application reuse. In the second study, we zoomed in on one of the reused applications and investigated its architectural evolution over three releases. The goal of this research is to investigate the quality of the system architecture as it evolves over releases. As a quality criterion, we investigated a specific type of technical debt (TD) called architectural technical debt (ATD), by introducing a bug classification framework to identify ATD hints. ATD is a type of debt that is difficult to identify and measure with existing automatic code analysis tools. The bug classification framework aims to bridge this identification gap by providing a practical classification scheme. Once bugs can be classified into ATD categories, different stakeholders, such as architects and managers, can make appropriate decisions on the management of ATD. In the third study, we are investigating the challenges of migrating an Ericsson subscriber provisioning system to CNAs from multiple perspectives. The study attempts to systematically identify and validate the specific areas impacted by the migration, relating them to cloud-native migration challenges already identified in prior research.
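To illustrate the kind of lightweight support such a classification framework could give, the sketch below is a hypothetical keyword-based first pass that flags bug reports with possible ATD hints. The categories and keyword lists are placeholders invented for this example; they are not the framework developed in the study.

```python
# Hypothetical first-pass classifier that flags bug reports with possible
# architectural technical debt (ATD) hints. Categories and keyword lists are
# illustrative placeholders, not the study's actual classification framework.
ATD_HINT_KEYWORDS = {
    "dependency": ["circular dependency", "tight coupling", "shared library"],
    "interface": ["api mismatch", "contract violation", "incompatible interface"],
    "deployment": ["configuration drift", "manual deployment", "environment specific"],
}


def classify_bug_report(title: str, description: str) -> list[str]:
    """Return the ATD hint categories whose keywords appear in the report."""
    text = f"{title} {description}".lower()
    return [
        category
        for category, keywords in ATD_HINT_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]


print(classify_bug_report(
    "Upgrade fails",
    "Circular dependency between the provisioning and billing modules"))
```

In practice, manually validated labels from such a pass could feed the kind of stakeholder decisions on ATD management described above.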
Energy consumption reduction has been an increasing trend in machine learning over the past few years due to its socio-ecological importance. In new challenging areas such as edge computing, energy consumption and predictive accuracy are key variables during algorithm design and implementation. State-of-the-art stream mining algorithms are able to create highly accurate real-time predictions on evolving datasets while adhering to the low computational requirements needed to run on edge devices. This is the case for the Hoeffding Adaptive Tree algorithm. This algorithm achieves high levels of predictive accuracy on evolving datasets by increasing the amount of computation, thus increasing its energy consumption. This paper proposes to extend the Hoeffding Adaptive Tree algorithm into a more energy-efficient version, named Green Hoeffding Adaptive Tree (GHAT). GHAT uses a per-node energy growth adaptation approach that has already been implemented and tested in the most recent Hoeffding tree algorithm, yielding promising energy reductions.
Keywords: GreenAI · Hoeffding Trees · Data Stream Mining · Energy Efficiency
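The abstract does not spell out GHAT's exact growth rule, but the general idea of energy-aware growth in a Hoeffding-style tree can be sketched as follows: a node is split only when the usual Hoeffding-bound test passes and a per-node computation/energy budget has not been exhausted. The budget attribute, its threshold, and the parameter values below are assumptions for illustration, not GHAT's actual mechanism.

```python
# Sketch of energy-aware growth in a Hoeffding-style tree: split a node only
# when the Hoeffding bound separates the two best attributes AND the node has
# remaining energy budget. The budget rule is an assumed illustration only.
import math


def hoeffding_bound(value_range: float, confidence: float, n_samples: int) -> float:
    """Standard Hoeffding bound used by Hoeffding trees for split decisions."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / confidence) / (2 * n_samples))


def should_split(best_gain: float, second_gain: float, n_samples: int,
                 node_energy_spent: float, node_energy_budget: float,
                 value_range: float = 1.0, confidence: float = 1e-7) -> bool:
    """Combine the usual split test with a per-node energy budget check."""
    epsilon = hoeffding_bound(value_range, confidence, n_samples)
    statistically_safe = (best_gain - second_gain) > epsilon
    within_energy_budget = node_energy_spent < node_energy_budget
    return statistically_safe and within_energy_budget


# Example: the gain difference is statistically safe, but the node has used up
# its hypothetical energy budget, so growth is deferred.
print(should_split(best_gain=0.30, second_gain=0.10, n_samples=2000,
                   node_energy_spent=5.2, node_energy_budget=5.0))
```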
Technical Debt (TD) items are manifestations of poor quality introduced by sub-optimal design decisions. Refactoring is a practice that developers perform to improve code quality and increase its maintainability and understandability. Some refactoring operations aim to improve code quality by removing specific types of TD items. Other refactoring operations help developers improve code understandability or produce "cleaner" code. While the motivations behind the refactoring operations used by developers have been investigated before, there is a lack of empirical evidence retrospectively investigating which files are the usual suspects for refactoring (i.e., are big files more prone to being refactored, or are the files with more TD items?). To fill this gap, we conducted an empirical study on three open source systems to investigate what matters when it comes to refactoring. We analyzed 16,150 commits in total to identify whether refactorings are more likely to happen in files containing more TD items or in bigger files. The main result is that size was found to be a significant factor in all the systems under analysis, whilst the number of TD items was not found to be significant in any of them.
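As a minimal sketch of how a question of this kind can be tested, the snippet below fits a logistic regression of a binary "file was refactored in this commit" outcome on file size and TD-item count. The data is synthetic and the model choice is an assumption for illustration; it is not necessarily the analysis performed in the study.

```python
# Minimal sketch, on synthetic data, of testing whether file size or the
# number of TD items better explains refactoring occurrence. Illustrative
# only; not necessarily the analysis used in the study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500  # synthetic "file in commit" observations

size_loc = rng.integers(50, 3000, size=n)       # file size in lines of code
td_items = rng.integers(0, 20, size=n)          # number of TD items in the file
p_refactor = np.minimum(0.9, size_loc / 4000)   # toy ground truth: only size matters
refactored = (rng.random(n) < p_refactor).astype(int)

X = sm.add_constant(np.column_stack([size_loc, td_items]).astype(float))
model = sm.Logit(refactored, X).fit(disp=False)
print(model.summary())  # check which coefficient is statistically significant
```

With the paper's actual commit data in place of the synthetic arrays, the coefficient p-values would indicate whether size, TD count, or both are significant predictors.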