Systems Engineering for Software Engineers

A blog for the COMP3530 learning portfolio

User Interfaces

This week marked the last formal panel session, and thus this is my final entry in this portfolio. In this entry I want to talk about something that has come up again and again through my portfolio entries: the human component of systems. On the surface it seems trivially obvious to say that systems engineering is a people-driven affair; systems are built by people to service the needs of people. I want to go deeper than this, though. During the big picture panel, Geoff Patch of CEA was asked whether his radar software development team pays much attention to interfacing with the user of the system. He responded that they didn’t, because their system simply connects into a larger system that is responsible for displaying data to the user.

This is ostensibly true; his system doesn’t directly display information to the user, but it still interfaces with the user. No, it isn’t responsible for drawing the lines on the screen, but the information it provides guides a user’s decisions and influences how the user reacts to a situation. To take a bold step and generalise this: any part of a system always has some interface to some user. The link might be tenuous, but it’s there. Take writing code: the code itself interfaces with a user through the manner in which it is written, through its comments and the names of its variables. The user the code interfaces with, i.e. those who write and maintain it, is different from the user the program interfaces with, but it is a user nonetheless.
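As a small illustration (a hypothetical sketch of my own, not something discussed in the panel), the same logic can present two very different interfaces to the people who maintain it. The names, units and threshold below are all invented.

```python
# Two versions of the same logic; only the interface to the maintainer differs.
# The names, units and threshold are invented for illustration.

def f(a, b):
    return [x for x in a if x > b]


def contacts_beyond_minimum_range(contact_ranges_km, minimum_range_km):
    """Return only the radar contacts further away than the minimum range.

    The docstring, the parameter names and the units are the 'user
    interface' this code presents to whoever maintains it next.
    """
    return [r for r in contact_ranges_km if r > minimum_range_km]


print(contacts_beyond_minimum_range([2.5, 10.0, 42.0], 5.0))
```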

These interfaces drive so many of the adjunct concerns of systems engineering. Sustainability is focused on ensuring that the system’s interface with future users is a non-destructive one. Ensuring a system’s safety requires safeguarding the user interfaces, which in turn requires understanding where those interfaces are and what risks are potentially associated with them. Some interfaces and their risks are more obvious than others: a plane’s interface to its passengers is much more direct than a nuclear power plant’s interface with the future inhabitants of Earth, but both must be considered carefully.

These interfaces have constraints; they have limits imposed implicitly or explicitly by the users on the other side. Abstraction, for example, is a response to a constraint placed on this interface by the limits of human attention. Communication between professionals of differing domains, as well as between technical and lay people, must accommodate the knowledge and experience restrictions these communication interfaces present. Exploring and better understanding the user interface our road system presents has led to radical rethinks about how we can signpost those roads to communicate better across that interface.

The consideration of this interface, and of how the system or component interacts with it, does not exempt us from dealing with the core function of the system. Geoff’s radar must still detect objects in the sky, but to say that this system, or section of a system, is devoid of a user interface is to miss the bigger picture.

Communicating Technically

During this week’s panel, we were all part of Chris Browne’s study into why people don’t understand climate change, in particular the stock-and-flow relationship between carbon emissions and the atmosphere. His underlying hypothesis is that, in general, lay people (along with a higher-than-optimum percentage of engineers) have a poor grasp of system dynamics. He theorises that it is this deficiency that causes large portions of the population to fail to realise that we cannot just slow the rate at which we release carbon into the atmosphere; we have to stop it. There are a lot more specifics to his work, which I won’t go into here. His overall argument, however, seemed to be that people just don’t get stocks and flows. In particular, they struggle to correctly relate graphical depictions of flows to the overall behaviour of the stock, and thus of the system.

So what’s new? Can we genuinely be surprised that your average person lacks strong technical knowledge? This isn’t exactly an earth-shattering revelation. Yes, it’s unfortunate that these particular people are politicians with global influence. Yes, it’s unfortunate that their failure to correctly interpret rates of change may have irreversible, potentially catastrophic impacts. But we should not be surprised. There’s a long history of people in power making decisions based on poor or incomplete technical understanding. These range from the disappointing, such as the mid-way cancellation of Labor’s National Broadband Network [1], to the absurd, like the Indiana Pi Bill, a proposition to change the value of pi to 3.2 [2].

Systems engineering involves the contributions of large numbers of people, technically and non-technically trained. Even amongst those with technical inclinations, it is unrealistic to expect them all to possess the same level of proficiency across technical domains. Effectively bridging these divisions of knowledge and understanding is critical to functional communication. I believe the graphical example that Chris used displayed a common error made by those attempting to communicate technically: it eschewed established norms.

His example showed the rate at which we are putting carbon into the atmosphere, rather than the total amount of carbon in the atmosphere. When asked how to stabilise the total amount of carbon in the atmosphere, a lay person falls back on the established norm whereby a flat line on a graph means stable. They don’t make the connection between rates of change and totals; they make the connection to established graphical norms. I spent quite some time trying to find a graph aimed at a non-technical audience that used a rate of change, rather than a total, on its axes, and I couldn’t; that’s how ingrained this norm is.
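A minimal simulation makes the distinction concrete. This is a sketch of my own with invented numbers, not figures from Chris’s study: a perfectly flat emission-rate line still produces a rising stock.

```python
# Stock-and-flow sketch: a flat (constant) emission rate still grows the stock.
# All numbers are invented for illustration.

atmospheric_carbon = 800.0   # the stock, in arbitrary units
emission_rate = 10.0         # inflow per year; a flat line on a rate graph
absorption_rate = 5.0        # outflow per year (oceans, forests, ...)

for year in range(1, 11):
    atmospheric_carbon += emission_rate - absorption_rate
    print(f"year {year}: atmospheric carbon = {atmospheric_carbon:.0f}")

# The rate graph never moves, yet the stock climbs every year.
# The stock only stabilises once the inflow is cut to match the outflow.
```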

As the breadth and depth of human knowledge expands, these types of issues are set to play an even larger role in systems engineering. This expansion cements the need to understand potential communication pitfalls such as this one.

References and Further Reading

1: NBN Comparison

2: Indiana Pi Bill

Sustainable Consumerism

In this week’s panel session, Dr Lorrae Van Kerkhoff attempted to get us, as software engineers, to think about the intersection of technology and sustainability. Her first question to the audience was about how we perceive sustainability: what do we think about sustainable practices? How important are they to us? A colleague next to me jovially remarked that he “didn’t give a s**t”, a harsh sentiment to be sure, but one I echoed – albeit slightly more eloquently. It is difficult to be strongly motivated by something whose immediate consequences are hard to perceive or absent altogether. Dr Van Kerkhoff overheard our exchange; far from taken aback, she indicated that by the end of the presentation she hoped to change our thinking – she failed. Her failure extends past convincing undergraduates, and typifies a larger issue sustainability advocates face: getting the general populace to care.

Don’t get me wrong, people care about sustainable practices, or, more accurately, people say they care. A 2011 survey indicates approximately two thirds of North Americans view sustainability as “Extremely or Critically Important” for “Making the world a better place” [1]. It seems like most people are on board the sustainability train! So why then have only 5% of Americans actively followed recommended green actions such as driving less or reducing utility usage [2]? During her presentation, Dr Van Kerkhoff talked about the Rio+20 United Nations global sustainability conference. She flashed up several lovely-looking graphical representations of exactly how broken the world currently is, and how much worse it’s going to get. Absent, however, was any mention of concrete sustainability practices that this conference, or any other, had implemented to any effect.

It’s a common criticism levelled at these types of global summits: that they are more talk than action, with some German commentators referring to the Rio+20 gathering as the “summit of futility” [3]. How can we expect the lay person to act any differently when the example set by those at the top is so poor? Some would argue that no matter the example set, it is simply too much to ask the ordinary person to adopt sustainable practices. They would argue that unless faced with impending doom, or an immediate wide-scale disaster, people just aren’t receptive to changing their unsustainable ways.

I firmly believe the underlying issue preventing wide-scale adoption of sustainable practices amongst lay people is that sustainable practices and products, on the whole, just aren’t attractive to consumers. It’s easy to convince people to care, but harder to carry that concern through to their wallets. There are a few examples, however, where sustainability and consumerism have collided and both have emerged better off. One such example is the recent increase in sales of fuel-efficient vehicles, growing over three times as fast as the rest of the vehicle industry [4]. This combination of sustainability and commercial desirability is key to increasing the number of people who, whilst in this case for purely selfish reasons, have a positive impact on sustainability.

References and Further Reading

1: How to Sustain Sustainability

2: Harris Poll on Green Living

3: Widespread Criticism of Rio Environment Summit

4: Fuel Efficient Car Sales Growth

Model Driven Disadvantages

Proponents of model driven development seem to spout countless benefits: “it captures intellectual effort more effectively” [1], “it bridges the gap between business and IT” [2], “Models offer greater extensibility and portability” [3]. This week’s panel presented a very interesting look at model driven development, specifically at some of the issues associated with it and why it hasn’t had the impact some hoped it would. Shayne offered a fairly in-depth exploration, at a high level, of why model driven approaches to engineering and software development are not ready for prime time. His focus was somewhat abstract, not really touching on problems with model-based approaches that manifest themselves at the implementation level. In this week’s entry, I’d like to have a closer look at some of the criticisms of model driven approaches at this level, in an attempt to gain a better understanding of why the approach hasn’t seen widespread adoption.

“MDD introduces a lot of rigidity” [4]

Abstraction is a powerful tool; it’s one of the key factors in enabling large software systems to exist. The tradeoff is, of course, a reduction in flexibility: you can’t manipulate memory in Java the same way you can in C. As technology has advanced, however, the need to manipulate that memory has diminished. So too, I believe, will the need for high levels of flexibility within large systems, where MDD makes the most sense, begin to reduce. We’re starting to see this now, with businesses and government departments favouring commercial off-the-shelf software over bespoke productions. This shift has been driven by the reduced cost and increased reliability outweighing the flexibility of custom software [5]. MDD’s attributes are well positioned to take advantage of this shift in priorities; I predict it won’t be long before the loss of flexibility is more than made up for by the increased efficiency.

“Development isn’t the slowest part of developing software, deploying and taking it into production is” [6]

One of the key benefits of model driven software engineering is that the models themselves are platform-agnostic. In theory the translation tool that generates code from the models is supposed to fill this gap, and whilst current tools do a decent job of creating code, they are unable to assist in actually getting that code running effectively [6]. Deploying the generated code to a production environment requires, amongst other things, security, infrastructure and corporate policy considerations. In a traditional approach, it is argued, these considerations are captured and implemented during the development process. It’s a tough issue: you’ve just generated this web of code, but now you need to position it correctly over the available resources. I believe Shayne’s video presented a way of thinking that may help solve this problem. If we treat the deployment environment, as well as the actual software, as a domain that can be modelled, then a specification archetype can be generated detailing how these models can be “woven” together [7]. This interfacing preserves the agnosticism associated with MDD; the same programmatic model can be combined, via a specification archetype, with differing infrastructure models.
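To picture that “weaving”, here is a minimal sketch of my own; the models, names and output format are all invented, and real MDD toolchains are of course far richer than this.

```python
# Toy example: weave one platform-agnostic programmatic model with two
# different infrastructure models. Everything here is invented for
# illustration and does not reflect any particular MDD tool.

programmatic_model = {
    "service": "OrderService",
    "operations": ["createOrder", "cancelOrder"],
}

infrastructure_models = {
    "on_premise": {"host": "appserver-01", "port": 8080, "tls": False},
    "cloud": {"host": "orders.example.com", "port": 443, "tls": True},
}

def weave(program, infrastructure):
    """Combine the two models into a simple deployment specification."""
    return {
        "deploy": program["service"],
        "expose": program["operations"],
        "endpoint": f'{infrastructure["host"]}:{infrastructure["port"]}',
        "tls": infrastructure["tls"],
    }

# The same programmatic model is woven with differing infrastructure models.
for name, infra in infrastructure_models.items():
    print(name, weave(programmatic_model, infra))
```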

MDD has a long way to go; I don’t believe anyone would argue otherwise. I also believe that it is the next logical step in the evolution of abstraction. Future large-scale systems development demands another layer of abstraction. Without one, just as the assembly programmer is unable to create enterprise-level software, we too will be ill-equipped to handle the requirements of the future.

References and Further Reading

1: Dr. Clive Boughton’s slides

2: Why you should start using Model Driven Development

3: Reasons to use Model Driven Development

4: Why Model Driven Development is Dangerous

5: Software Reuse and Off the Shelf Software

6: Why Model Driven Software Development Isn’t Fast Enough

7: Shayne’s MBSE slides

Our Place in the World

This week’s panelist discussed the idea of system architecture, in particular the role of solution architects in complex IT systems. The architects of a system are charged with understanding and controlling the complexity challenges that arise when creating these large systems of systems. This particular aspect of information technology is the one I hope to eventually end up working in: partially because I enjoy the challenges of working with and developing large systems, but primarily because of money; system, solution and enterprise architects simply make more than their software engineer equivalents [1].

My path to university was different from most: after four years of low-paying unskilled labour jobs, I decided enough was enough and that a degree was the gateway to a greater salary. This particular motivation is never really touched upon at university; for whatever reason, academics are generally reserved when it comes to talking about potential salaries. We, as students, do get some indication, but it comes through guest lectures and people outside the university, and only offers a snapshot of the current career environment.

I think this speaks to a larger issue I’ve been experiencing. In this course we’ve been exposed to complex systems that encompass many professionals: some university educated, some not; some with decades of experience, some with only a handful of years. Where do we, as university graduates, fit in? Where are we, as ANU software engineering graduates, best utilised? Certainly we could let our future employers decide the correct position, and undoubtedly most of us will go down this path. In doing this, though, we assume those employers understand our education. Certainly some of the speakers we’ve had have noted a preference for ANU students, primarily because they themselves are ANU graduates and have some knowledge of the curriculum. We cannot, however, expect this understanding from all potential employers.

You could argue that because our degree is accredited by Engineers Australia (EA), employers are provided with a baseline that outlines a graduate’s minimum acquired skills. However, looking at the publicly available accreditation page [2], EA only briefly mentions the benefit to employers, focusing instead on relevance to international engineering governing bodies, universities and the government. Going further and looking at the stage one competencies, those which a graduate from a certified degree program will have achieved [3], one finds a very qualitative list of requirements, more focused on describing how we can do things than on what we can do.

It’s hardly a pragmatic documentation of a university graduate’s skills, and it certainly doesn’t help illuminate the potential positions we can effectively fill within a system. I think in some ways this lack of tangibility promotes the mindset that some graduates have run into in the workforce, namely: “forget all that stuff they told you at university, this is how we do it in the real world”.

References and Further Reading

1: Solution architect salary

2: Engineers Australia Program Accreditation

3: Stage One Competency Standards for Professional Engineer

A Case for Safety

This week, we all took time out of our much-deserved break to attend the panel session on system safety. To me, the most interesting idea presented was the concept of safety cases: structured arguments, supported by evidence, intended to justify that a system is safe [1]. David Pumfrey, the panelist, offered a legal case as a point of comparison. This immediately set alarm bells off in my head. Are we really considering a system in which half those involved (by definition) defend the wrong side as a good model for validating system safety?
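To make “structured argument supported by evidence” a little more concrete, here is a small sketch of my own of what a fragment of a safety case might look like as data; the claims and evidence are invented, and real notations such as GSN carry far more detail.

```python
# Toy safety-case fragment: a top-level claim decomposed into sub-claims,
# each ideally backed by evidence. All content is invented.

safety_case = {
    "claim": "The braking subsystem is acceptably safe",
    "sub_claims": [
        {"claim": "Software halts the motor within 50 ms of a brake command",
         "evidence": ["timing test report TR-112"]},
        {"claim": "A hardware interlock stops the motor if the software fails",
         "evidence": ["FMEA worksheet", "interlock inspection record"]},
        {"claim": "Operators are trained in the emergency-stop procedure",
         "evidence": []},  # an unsupported claim: exactly what a reviewer should flag
    ],
}

def unsupported_claims(case):
    """Return the sub-claims that have no supporting evidence."""
    return [sub["claim"] for sub in case["sub_claims"] if not sub["evidence"]]

print(unsupported_claims(safety_case))
```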

It’s even worse than that, though: at least with law, both observers and those involved understand the rules of the game. This understanding ensures that, whilst the system is not perfect, it performs reasonably well. Such considerations do not apply to safety cases; rather than two parties trying to convince an impartial third, they generally involve one highly invested party attempting to convince an assortment of potentially affected groups. The safety case process is only effective when all parties involved possess both the required technical aptitude and irreproachable moral qualities. This is not the realm of law, though; surely we can trust that, as professionals in an engineering context, the individuals and groups involved display these traits?

To put it bluntly, no. Even when we narrow the scope to those responsible for generating safety cases, normally the engineers involved in building the system, we still find they display fundamental inadequacies in these areas. An MIT study conducted in 2012 found that confirmation biases significantly affected the quality of arguments provided for, and the relevance of evidence used within, safety cases [2]. It makes sense, though: a team of highly intelligent and motivated people has been handed the task of arguing that a system is safe. You’re not going to get a response that highlights any particular safety flaws, because that was not the task to which they were assigned.

Perhaps we could just change the question asked? Instead of asking them to prove that a system is safe, why not ask these professionals simply to determine whether or not the system is safe? Bluntly put, this doesn’t work either. There is no doubt that the team handed this task has a vested interest in generating an outcome that shows it is indeed safe; after all, they are the ones responsible for its construction. Given this predisposition towards showing safety, the two questions are really one and the same. So what can we do about this? The MIT study suggests the solution is an independent investigation of the system, performed by a similarly skilled set of professionals, but with the goal of exposing the system’s safety inadequacies. This generates a much stronger understanding of the system’s safety properties.

Perhaps I was too hasty to dismiss the law as a model for assessing safety. The concept of assigning two parties diametrically opposed positions, and having each investigate and provide evidence for its own, may be the best way to go about exploring system safety: not as a way to provide a black-or-white answer to whether a system is safe, a question to which only a trivial system has an answer, but as a way to generate a balanced dialogue about that system’s safety properties, from which they may be better explored and understood.

References and Further Reading

1: Safety Cases

2: The use of Safety Cases in Certification and Regulation

The Weakest Link

Last week I talked about how both road engineers and software engineers use abstraction to compensate for the limits of human concentration. It was interesting to see how seemingly very different disciplines apply the same principles to circumventing human fallibility. This week I want to spend some more time exploring this topic; how systems in different fields predict and prevent human error is such an important aspect of systems engineering that I felt it deserved a further look.

Up to 80% of maritime accidents [1], and 60 to 80% of aviation accidents [2], are attributed to human error. These are staggering statistics, and they highlight just how much of an issue the human link is when dealing with system safety. In my last entry, I pointed some of the blame at our fundamental inability to divide our attention effectively; whilst this is certainly one component of human-related system failures, it is definitely not the only one.

In a report into how complex systems fail, Richard Cook of the Cognitive Technologies Laboratory in Chicago compiled a list of the most common causes of system failure [3]. The most interesting, and I feel most overlooked, factor he highlights is the dual nature of the operators in these systems. System practitioners operate a system in order to produce its desired product; the system must do something useful. However, they also work as defenders of the system against failure; they must ensure that the system operates safely. Dr. Cook underlines how, during normal operation, the emphasis is placed on production, yet after an accident the focus moves to the operator’s role as defender against failure.

One example of this was the KLM aviation accident at Tenerife, which we explored in the (wonderfully run) tutorial in week 5. The KLM pilot was working under new regulations regarding flight times, a company-enforced focus on system production, which influenced his decision to rush the takeoff procedure, leading to the worst aviation disaster of the century. With the benefit of hindsight, it was clear to us in the tutorial that the pilot should have followed procedure regardless of production pressure. It was in doing this that we fell into the trap that Cook explains many post-accident investigations fall into: after the accident, we placed all of the emphasis on safety without any concern for production.

It is easy to condemn the KLM pilot’s actions as an unacceptable gamble, but in doing this we forget that all actions in a large system are gambles. With the benefit of hindsight, any ambiguity about the results of such chance manoeuvres is removed, and in this light of course they appear inappropriate. By understanding these influences on the human factors within complex systems, and giving proper regard to the dualistic and risk-based path those humans must tread, we can form a more holistic picture of why human error occurs and how we can systemically reduce its likelihood.

References and Further Reading

1: Safety in Shipping

2: Analysis of Military and Civilian Aviation Accidents

3: How Complex Systems Fail

The Road to Abstraction

Watching an experienced team of pilots ignore a warning about their rapidly decreasing altitude, because they were too preoccupied with a burnt-out landing gear indicator, is a scary demonstration of the limitations of human attention. This week’s panelist, Dr. Robyn Clay-Williams, explored the human aspect of complex systems, with a focus on what happens when this critical component breaks down. Eastern Air Lines Flight 401’s crash serves as a dramatic example of this. There is, however, a more benign example of a system where the lack of human perfection is readily apparent, one that most of us encounter on a daily basis: roads.

75% of all road deaths in Australia can be attributed to human error [1]. With 1,193 road deaths in 2013 [2], it’s not a hard argument to make that measures to reduce the impact and frequency of human mistakes on our roads are needed. A slightly harder argument would be that the best way to do this is a reduction in the amount of signage present on our road systems. A harder argument still would be that removing all signage, road markings and physical divisions between pedestrians, cyclists and motor vehicles could reduce the number of fatalities on our roads.

This idea of shared space, initially championed by the late Hans Monderman, runs on the ideology that the only way to make an intersection safe “is to make it dangerous” [3]. By removing the traditional road infrastructure adjuncts, proponents claim, a driver’s attention is no longer divided and they are free to focus on “negotiating movement via eye contact” [3]. These changes are having a measurable impact, with reductions in fatalities at several intersections across Europe where the technique has been implemented. The UK town of Poynton has recently converted a large intersection servicing over 26,000 vehicles per day into a shared space and is experiencing an improvement in overall throughput [4].

Reducing the complexity of a system to cater for its human component is a process software engineers are well acquainted with. By abstracting away from lower-level details and compartmentalising functionality, developers are able to reduce the number of concurrent demands on their attention. This ideology has been a common thread through software development almost since its very beginnings. It’s exciting to see the same approach to dealing with the limits of the human mind applied in a completely different arena. The concept of shared space is still a young one, and it will certainly be interesting to see if it can have the same long-term, defining impact as its complementary ideology has had on computer science.
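As a trivial sketch of what that compartmentalisation looks like in code (an invented example of my own), the reader of the top-level function never has to hold every lower-level detail in mind at once.

```python
# Invented example: each layer hides detail, so no single reader has to
# attend to everything at once.

def parse_order(raw_text):
    item, quantity = raw_text.split(",")
    return {"item": item.strip(), "quantity": int(quantity)}

def calculate_total(order, unit_price=4.50):
    return order["quantity"] * unit_price

def process_order(raw_text):
    # Only two named steps are visible here; the parsing and pricing
    # details are compartmentalised below this level of abstraction.
    order = parse_order(raw_text)
    return calculate_total(order)

print(process_order("coffee, 3"))
```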

References and Further Reading

1: Australian Road Death Statistics

2: 2013 National Road Toll

3: WIRED Article on Hans Monderman and Shared Space

4: Article on Poynton’s Shared Space implementation

 

Stereotypes

Body language is such an important aspect of human interaction; subtly, it can communicate so much more than words alone. In this week’s panel, Derek Koina’s non-verbal communication indicated that he was perhaps not the most confident presenter: he retracted his physical presence into folded arms, his only connection with the audience a series of nervous glances. It was a jarring experience when contrasted with the previous panelists, all of whom seemed quite at home in front of an audience. The idea that communication is critical to success, not only in systems engineering but across all projects, is repeated almost to the point of rhetoric. Yet there in front of me was a successful systems engineer who was failing to initiate any sort of meaningful communication with his audience.

To my eyes, during the presentation Derek embodied the “stereotypical” engineer: more concerned with developing the latest missile defence system than with exploring the nuances of human interaction. This may seem like unwarranted maliciousness; after all, presentation skills do not make an engineer, and Derek’s impressive career is certainly a testament to that. So why dwell on it? Because during his talk I couldn’t help but think that his presentation reflected how parts of the professional community, and the public at large, view engineers, especially software engineers.

This perception that, on average, engineers possess below-average communication skills just doesn’t stack up. Disseminating ideas and interpersonal abilities are key competencies in engineering, especially in the systems context. Clear communication is, of course, beyond critical for ensuring all the parts come together cohesively. So from where does this misconception stem?

In an article for Engineers Australia, Nikki Mead, an engineering graduate who has recently entered the workforce, suggests that her colleagues almost enjoy the stereotype: “The easiest way to get a laugh out of a bunch of engineers is to poke fun at their lack of social skills [or] their dullness.” [1] In her experience they certainly aren’t actively trying to shake off the cliché. This inactivity leads to an almost self-fulfilling prophecy; I’ve had numerous lecturers recount the anecdote of a student choosing engineering or computer science because they didn’t want to be forced to interact with people. It’s unsurprising, then, to hear industry lament its recent graduates’ communication and teamwork deficiencies [2].

Engineering is, now more than ever, a people-centric occupation. If the engineering profession is to attract people well matched to its requirements, then the image it presents needs to change. Engineering needs to begin promoting itself as a diverse profession that, yes, involves a lot of technical skill, but also incorporates a high level of social interaction.

References and Further Reading

1: Nikki Mead’s article for EA

2: Why Industry Says that Engineering Graduates have Poor Communication Skills

Ethical Routing

The topic this week was the engineering context: the role of engineers and engineering against a systems backdrop. Geoff Patch of CEA Technologies was enlisted to give some perspective on how software engineers fit into this broader systems space. Whilst this may have been the original goal of the talk, large amounts of time were spent discussing how CEA’s latest and greatest radar missile defence system worked and how awesome it was to blow stuff up. This struck a chord with me: yes, of course missiles and explosions are awesome, but conspicuously lacking from the talk was any reference to the ethical implications of the system.

During the Q&A session at the end, I asked Geoff about the company’s ethical concerns; his response focused on how the consideration of ethics came down to the individual team member. He said that he could understand a person’s reluctance to work on a system that had the potential to kill, and had experienced job candidates declining offers because of it. He rationalised his own choice by saying that CEA built for the Defence Force, and that he believed their system would only be used under the correct circumstances.

Whilst CEA are certainly scrupulous with their customer base, not as much can be said for many other companies. Cisco’s involvement in the creation of China’s “Golden Shield”, its nationwide internet monitoring and control system, has come under heavy ethical criticism for the better part of a decade. The Electronic Frontier Foundation has led the charge in calling for companies like Cisco to be held at least partly responsible for the usage of their products, urging that human rights be a factor in business negotiations [1]. The non-profit organisation has drafted a set of guidelines it urges companies to follow when dealing with authoritarian governments, with the aim of helping technology companies “avoid being repression’s little helper” [2].

Where, in all of this, does the individual software engineer stand? Can they be expected to share in the ethical concern generated when the product they create is used to suppress human rights? It’s hard to imagine a potential Cisco employee considering the ethical ramifications of router or firewall software. When the destructive use of a product and its creation are so far removed from each other, a large moral grey area is created. In this murkiness, clear-cut answers to questions about ethical responsibility are never going to emerge.

I would argue that they don’t have to; in many respects I agree with Geoff’s perspective about ethical decisions falling to the individual. There is an important caveat here: to make an informed decision, individuals need just that – information. In response to ethical concerns, Cisco made a statement before the House International Relations Subcommittee [3] in which they outlined their position clearly: they would continue to sell their products to whoever could afford them, but they would not customise them for the purpose of repression.

Armed with this information about Cisco’s intentions, individuals are able to develop their own standpoint, their own answers to these murky questions. I believe it is the collective’s obligation to push for these inquiries in order to make these details available, but the ultimate decision can only fall to the individual.

References and Further Reading

1: Cisco and China’s Human Rights Abuse

2: Know Your Customer Sales Standards

3: Cisco Testimony