Presentation Material from SERT Virtual Conference 2020
With more and more recipes on the market for how to build and scale your agile software delivery – SAFe, LeSS, the Spotify Model – it is very hard to know where to start. Implementing a wholesale solution often takes time and leads to disappointing results. I would like to explore a different approach, one that starts with principles and common beliefs and develops towards a set of tools and practices through an incremental and experimental approach. Just like you would build your software.
Working from home (WFH), or telework, was until COVID-19 known only as a voluntary and often exceptional practice in the workplace. On one hand, telework is often associated with a perceived increase in productivity and job satisfaction (mostly self-reported by teleworkers); on the other hand, with managerial concerns and a loss of control (as reported by managers). Many managers, then and now, are skeptical because they question the ability of their staff to handle remote infrastructure, solve any situation independently, manage their time properly or work without supervision. But is telework really a problem? In this talk, we will share our findings from monitoring the company-wide transitions to WFH caused by COVID-19 in several software companies. Our analysis of commit data, calendar invitations and Slack communication shows that software engineers continue working, and that their routines and work patterns are not that different from those of office times. Distributed work is surely not challenge-free. However, company support evidently results in many engineers not only surviving, but enjoying their experience of WFH. Hear more about the lessons learned from the WFH “experiment” in our talk.
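The kind of commit-data analysis mentioned above can be approximated with nothing more than `git log`. Below is a minimal, hypothetical sketch (not the authors' actual pipeline): it builds hour-of-day commit histograms before and after an assumed WFH cutoff date, so the two profiles can be compared to see whether daily work rhythms shifted. The repository path and cutoff date are illustrative assumptions.

```python
# Toy sketch: compare commit-activity patterns before and after an
# assumed WFH cutoff date, using plain `git log` output.
import subprocess
from collections import Counter
from datetime import datetime

CUTOFF = datetime(2020, 3, 15)  # hypothetical start of company-wide WFH


def commit_times(repo_path):
    """Yield one datetime per commit in the repository."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=%aI"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    for line in log:
        yield datetime.fromisoformat(line.strip())


def activity_profiles(repo_path):
    """Return hour-of-day commit histograms before and after the cutoff."""
    before, after = Counter(), Counter()
    for ts in commit_times(repo_path):
        bucket = before if ts.replace(tzinfo=None) < CUTOFF else after
        bucket[ts.hour] += 1
    return before, after
```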
A common denominator of most, if not all, software engineering companies is that they work team-based, meaning that the team has replaced the individual as the most critical organizational entity. Organizations are ill-equipped to manage this transition and are struggling to find effective ways to develop their teams. We have expertise in team development, and we can offer a solution to an existing and well-known problem.
This presentation will share our experiences of using the Integrated Model of Group Development (IMGD, https://gdqassoc.com/) to improve agile teams’ performance at Volvo and Saab. In our experience, this model appeals to managers as well as development teams.
For the last couple of decades, biometric sensors have boosted the performance of professional athletes. Nowadays, these sensors are available for anyone to monitor and improve their daily lives—including software developers. In this talk, I will show how some common sensors can be used to support software development—from requirements elicitation to source code comprehension—and illustrate use cases in which this area of research can be further developed.
Data-driven product discovery and evolution have become the main drivers for business growth and competitive advantage. This talk revisits the fundamentals of requirements engineering from a data-driven perspective and points out promising avenues for finding growth potential in the data that your customers generate. We discuss the main transformational steps towards data-driven requirements engineering and the organizational challenges involved.
Information and communication technologies such as wikis offer companies several benefits: reduced time constraints for sharing, a common knowledge base, and easier localizing, retrieving, and reusing of knowledge. However, inefficient use of these tools has brought many issues to agile environments, including duplicate and outdated information, multiple repositories, unawareness of knowledge sources, and incomplete or invalid information. In this talk, I show how these issues can be addressed with efficient knowledge repositories.
Many codebases contain code that is overly complicated, hard to understand, and hence expensive to change and evolve. Prioritizing technical debt is a hard problem, as modern systems might have millions of lines of code and multiple development teams — no one has a holistic overview. In addition, there is always a trade-off between improving existing code and adding new features, so we need to use our time wisely. So what if we could mine the collective intelligence of all contributing programmers and start to make decisions based on information about how the organization actually works with the code?
In this presentation you’ll see how easily obtained version-control data lets you uncover the behavior and patterns of the development organization. This language-neutral approach lets you prioritize the parts of your system that benefit the most from improvements, so that you can balance short- and long-term goals guided by data. The specific examples are from real-world codebases like Android, the Linux Kernel, the .NET Core Runtime and more. This new perspective on software development will change how you view code.
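To give a flavor of the technique, here is a minimal sketch of how change frequency can be mined from version control and combined with a crude size proxy to rank hotspot candidates. The repository path is an assumption, and real analyses would use richer complexity measures than line counts.

```python
# Minimal hotspot sketch: rank files by (change frequency x size).
# Frequency comes from `git log --name-only`; size is a crude
# complexity proxy (line count).
import subprocess
from collections import Counter
from pathlib import Path


def change_frequencies(repo_path):
    """Count how many commits touched each file."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in log.splitlines() if line.strip())


def hotspots(repo_path, top=10):
    """Rank files by revision count multiplied by current line count."""
    scored = []
    for rel_path, revisions in change_frequencies(repo_path).items():
        f = Path(repo_path) / rel_path
        if f.is_file():  # skip files deleted or renamed since
            loc = sum(1 for _ in f.open(errors="ignore"))
            scored.append((revisions * loc, revisions, loc, rel_path))
    return sorted(scored, reverse=True)[:top]


if __name__ == "__main__":
    for score, revisions, loc, path in hotspots("."):
        print(f"{score:>8}  {revisions:>4} revs  {loc:>6} loc  {path}")
```

The design choice worth noting is that the ranking needs no parser for any particular language: both signals come from the version-control history and the file system, which is what makes the approach language-neutral.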
Who takes the important decisions in your projects? Traditionally, all important strategy, structure, and work-design decisions, as well as most ongoing decisions about work procedures, have been made by organizational management. Yet, as Tayloristic habits disappear, organizations willingly (or unwillingly) change their decision-making approaches to enable more participation and influence from the performers, which has given rise to participation-based parallel organizational structures such as quality circles, task forces, and communities of practice. The latter is the central topic of this talk. A community of practice (CoP) is usually a group of people with similar skills and interests who share knowledge, make joint decisions, solve problems together, and improve a practice. Communities of practice are cultivated for their potential to influence the knowledge culture and to bring value to individuals, teams, projects, and the organization as a whole. Despite the assumed benefits, implementing successfully functioning CoPs is a challenge, and even more so in large-scale distributed contexts. In this talk, you will learn what helps to run successful communities of practice, based on findings from studying member engagement in large-scale distributed communities of practice at Spotify and Ericsson.
In this talk, we will present Scout, a tool demonstrating a novel approach to GUI testing that allows the user to record scriptless test scenarios into a model-based data structure from which test cases can be replayed, either manually or automatically, through the tested application’s GUI. The approach takes inspiration from augmented reality and renders a head-up display on top of the GUI of the system under test (SUT) to provide the tester with testing information. In addition to showing previously recorded test scenarios and test data, the tool also uses machine-learning algorithms to find patterns in recorded data and suggest test improvements, thus merging technologies from record and replay, model-based testing, semi-automated testing, and machine learning into one approach.
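To make the record/replay idea concrete, here is a hypothetical sketch (not Scout's actual implementation) of the core data structure: recorded interactions are keyed by an abstract GUI state, so a scenario can later be replayed step by step against the same states.

```python
# Hypothetical sketch of a model-based record/replay structure; all
# names here are illustrative, not Scout's actual API.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Action:
    widget: str      # identifier of the widget acted upon
    kind: str        # e.g. "click", "type"
    value: str = ""  # e.g. text typed into a field


@dataclass
class TestModel:
    # state fingerprint -> list of actions recorded in that state
    transitions: dict = field(default_factory=dict)

    def record(self, state: str, action: Action):
        """Store an observed action for the given GUI state."""
        self.transitions.setdefault(state, []).append(action)

    def replay(self, state_sequence):
        """Yield the recorded actions for a sequence of visited states."""
        for state in state_sequence:
            for action in self.transitions.get(state, []):
                yield state, action


# Usage: record a scenario once, then replay it through the GUI driver.
model = TestModel()
model.record("login_screen", Action("user_field", "type", "alice"))
model.record("login_screen", Action("login_button", "click"))
for state, action in model.replay(["login_screen"]):
    print(state, action)
```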
Large organizations that develop software-intensive products and services are adopting agile methods for frequent delivery of working software. Such organizations usually run huge projects in large, distributed development settings, which requires agile methods to be scaled. While scaling, the size and complexity of the organization and of the software systems being developed grow, which increases the complexity of socio-technical interdependencies. This gives rise to challenges including increased dependencies on other organizational units and the co-evolution of software architecture. This research aims to alleviate the ever-growing complexity of socio-technical interdependencies by developing an innovative way of refactoring architecture and organization in combination. This entails the ability to refactor at a “micro” level, i.e. considering technical structures and their corresponding social structures and interdependencies. Based on an assessment of the value/waste propositions associated with these interdependencies, this refactoring will retain or create the interdependencies that generate value while keeping both the architecture and the organization aligned yet simple, despite co-evolution. The outcome will be an innovative refactoring method with guidelines. The work also intends to establish a means to continuously measure how the refactoring impacts the value proposition for the organization. In this talk, I will discuss how large organizations can scale properly using lean and agile principles, and continuously improve on creating value and removing waste, thereby improving ways of working.
Insufficient Requirements Engineering (RE) is said to negatively impact subsequent development activities. This is often corroborated by observations in practice where companies invest little to no effort in RE, or effort in the wrong aspects of it, yielding quality deficiencies in the process and the final product. At the same time, RE is a means to an end rather than an end in itself.
Effort spent on RE therefore needs to be well justified, which calls for a method for estimating compromises and flaws in RE, as well as their impact on the project. In this talk, I will present current approaches to evidence-based risk management in RE and outline a vision of how the notion of “good-enough RE” might be used for detecting and managing debt accrued in the RE phase, in turn providing evidence for estimating risk in RE.
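As a toy illustration of the risk-estimation idea (the flaw list, probabilities, and costs below are invented for illustration, not a method or data from the talk), RE flaws can be scored by expected impact: the probability that a flaw propagates downstream times the estimated rework cost if it does.

```python
# Toy expected-impact scoring for RE flaws: score = probability x cost.
# All items and numbers below are illustrative assumptions.
flaws = [
    # (description, probability of downstream impact, rework cost in person-days)
    ("ambiguous performance requirement", 0.6, 40),
    ("missing stakeholder for billing",   0.3, 80),
    ("untestable security requirement",   0.5, 25),
]

# Rank flaws by expected rework cost, highest first.
for description, probability, cost in sorted(
        flaws, key=lambda f: f[1] * f[2], reverse=True):
    print(f"{probability * cost:5.1f} expected person-days  {description}")
```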