
Speed the Right Way: Design and Security in Agile


Summary

Kevin Gilpin discusses the renewed focus on software design and code complexity in software security, describing how design review can be modernized to help improve application security.

Bio

Kevin Gilpin is an enterprise software engineer with over 20 years of experience spanning various industries including healthcare, automotive, logistics, and life sciences. He was recently CTO of Conjur, then CyberArk Fellow following the acquisition of Conjur by CyberArk in 2017. He is a pioneer in the adoption of DevOps, cloud, and containers in the enterprise.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Gilpin: I just noticed that we're in Whittle [room]. Does anyone know what Whittle is famous for? He invented the turbojet engine, which is pretty awesome because I like jet engines.

Does anybody recognize that plane? That's called the X-15. Only three X-15s were ever built, flying 199 test flights between 1959 and 1968. The X-15 set the official world record for the highest speed ever recorded by a crewed, powered aircraft: Mach 6.7 at 102,000 feet, which is 4,520 mph, and that is still the record. Needless to say, the engineers and pilots were taking a big risk with the X-15, so they paid careful attention to safety during design and testing.

Building secure software is also high-risk engineering because the penalties for mistakes are high and getting higher. For the last 15 or 20 years in the software world we have focused on Agile methods, flow, and continuous delivery. These practices were inspired by the manufacturing processes used to build millions of cars, but designing secure software is not like building a million Toyotas.

When you're creating a new design, you're entering a world of unknown unknowns where you don't even know where the risks and threats will come from. I think that designing secure software is a lot more like building X-15s in which each detail really matters and mistakes are very expensive, sometimes unrecoverable. I think there's a lot we can learn about building more secure software by looking at the lessons learned by engineers in other disciplines and I'm using the X-15 as a symbol of that belief.

My name's Kevin Gilpin, as you heard. Most recently I was founder and CTO of a DevOps security company called Conjur, which was acquired by CyberArk in 2017. Previous to that I spent about 20 years in enterprise software engineering, working in fields such as healthcare, drug discovery, consumer web applications for the automotive industry, and transportation logistics.

I was educated as an aerospace engineer at the Massachusetts Institute of Technology, and even at that time software played a large role in engineering. I learned to program in C++ as part of my engineering coursework; that's how I got my start in software. As you'll see in this presentation, I'm still enthusiastic about aviation.

As a founder of Conjur, I set out to help automate routine security tasks, so I'm a believer in the ability of automation to improve security. However, it's clear to me now that the complexity of security issues is beyond what we can remediate through automation. The human elements of analysis and collaboration remain critical. My focus these days is really on assistive and collaborative products because once common flaws are engineered and automated out, then design-related issues are the next level of challenge.

Thanks to GDPR and other mandatory reporting laws, we're getting more and more insight into the causes of security breaches. We've become used to seeing issues like unpatched software, leaked credentials, and OWASP-style flaws lead to big problems. In 2018 we started to see a remarkable new theme emerge in breach notifications, which I will call "blame the programmer." What happened this year? Attackers were able to exploit flaws in the design of the software rather than low-level technical details. In some cases, the data leaks were so egregious that the attackers did not need to perform any penetration of the system at all.

In this talk, I'll discuss how security must be considered in the design of modern applications. First, I'll point out some specific breaches and analyze how flaws have contributed to them, then I will describe security considerations which should be examined during design reviews continually through the Agile life cycle. Finally, I'll discuss how to modernize the way that design reviews are conducted and how to expand design reviews to include security experts and other stakeholders beyond the engineering team.

Complex Systems Fail in Complex Ways

This is an accident from 9th November 1962, an X-15 crash. As tragic as they are, airplane crashes provide an important opportunity for learning. Those opportunities are not wasted in the aerospace community; each accident is immediately investigated in painstaking detail in order to determine the causes of the accident. Aviation accidents always have more than one cause because every system is redundant. And yet there are still accidents, because unanticipated situations continue to arise no matter how carefully the vehicle is designed and tested.

Mishaps and accidents are inevitable, and the best we can do when they occur is learn as much as we can from them. Once an accident investigation is concluded, engineers determine whether design changes should be made to the airplane in order to improve instruments, diagnostics, margins of safety, and redundancy. Needless to say, in software development, we have plenty of accidents to investigate as well.

Next, I'll be talking about some specific breaches. Before I do that, I want to make it clear to everyone that I'm not trying to pick on anyone; I've made mistakes of this nature myself, but I'm not at liberty to discuss most of them, so I picked mistakes that look like the ones that I've made. I don't feel particularly bad about the mistakes that I made, and you shouldn't either; accidents are part of being on the frontier. However, we do need to be honest about looking at our mistakes and learning from them.

Speeding the Wrong Way

In September of 2018 a multifaceted design flaw in Facebook allowed attackers to "hop, skip, and jump" their way into generating access tokens for millions of Facebook users. The vulnerability was introduced by a feature called "View As," a privacy feature that was designed to let users see their Facebook pages as other users would see them.

When a user activated View As, the Facebook site would load all the widgets on the page while assuming the identity of another user. This was meant to be a safe operation, however, an unexpected interaction occurred between the View As feature, the video upload service, the authentication service, and the browser. The View As feature was meant to improve privacy but instead it resulted in an embarrassing and expensive breach. Here's a closer look at how it happened, tracing it step-by-step.

First, the user logs into the website and uses the View As feature, typing the name of a user to impersonate. View As assumes the role of the other user and loads all the components of the logged-in user's Facebook page; one of the features loaded on the page is the video upload service. When the video upload service is activated by View As, it mistakenly obtains an identity token for the assumed user rather than the logged-in user.

The token of the assumed user is then placed in the browser where the logged in user can easily obtain it. Attackers exploited this vulnerability by logging in as themselves and repeatedly using the View As feature to assume the identity of other users. Identity tokens for those users were extracted and used to log in to other parts of the website such as the account settings page. The attackers didn't have to attack the website or the users, the site simply made the access token of any user visible to any other user.

I don't have any privileged insight into the Facebook breach, but here are some observations about the design which I've made from publicly reported information. First, I think it's safe to say that the video upload service is overprivileged in its ability to obtain an access token for any user. A more failsafe design would only allow services like video upload to obtain an access token for the logged in user.
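To make that fail-safe rule concrete, here is a minimal, purely hypothetical sketch in Python; the Session and TokenService names and the View As field are my own illustration, not Facebook's actual design:

    from dataclasses import dataclass

    @dataclass
    class Session:
        logged_in_user: str      # the user who actually authenticated
        viewing_as: str | None   # set when the "View As" feature is active

    class TokenService:
        def issue_token(self, session: Session, requested_user: str) -> str:
            # Fail-safe rule: tokens are only ever issued for the logged-in user.
            # A request on behalf of an impersonated ("View As") identity is refused.
            if requested_user != session.logged_in_user:
                raise PermissionError(
                    f"refusing to issue a token for {requested_user!r}; "
                    f"session belongs to {session.logged_in_user!r}"
                )
            return f"token-for-{requested_user}"  # placeholder for real token minting

    # Usage: the video upload widget asks for a token while View As is active.
    session = Session(logged_in_user="alice", viewing_as="bob")
    tokens = TokenService()
    tokens.issue_token(session, session.logged_in_user)   # allowed
    # tokens.issue_token(session, session.viewing_as)     # raises PermissionError

With a rule like this in place, the over-privileged token fetch that started the chain simply cannot happen.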

Next, when looking at View As, the widgets on the Facebook page are essentially untrusted code because they haven't been certified to be safe and compatible with View As. When executing untrusted code, one would typically put that code in a sandbox to keep any security issues from leaking to the outside world. By executing untrusted code directly in the browser, the design allowed the token leak to have a serious impact. Note that none of these design flaws alone could have caused the breach, only by activating three flaws in concert does the token leak occur.
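A rough sketch of what the whitelist and sandbox ideas could look like together; the widget names, the allowlist, and the renderer functions below are invented for illustration:

    # Only widgets explicitly certified as safe for impersonation are rendered
    # while "View As" is active, and they go through a sandboxed renderer that
    # never receives real credentials.
    VIEW_AS_ALLOWLIST = {"profile_header", "timeline", "photos"}  # assumed names

    def render_in_sandbox(widget: str) -> str:
        # Stand-in for a renderer that gets no tokens and no write access.
        return f"<sandboxed:{widget}>"

    def render(widget: str) -> str:
        return f"<{widget}>"

    def render_page(widgets: list[str], view_as_active: bool) -> list[str]:
        rendered = []
        for widget in widgets:
            if view_as_active and widget not in VIEW_AS_ALLOWLIST:
                # Deny by default: uncertified widgets are skipped during impersonation.
                continue
            rendered.append(render_in_sandbox(widget) if view_as_active else render(widget))
        return rendered

    print(render_page(["profile_header", "video_upload"], view_as_active=True))
    # ['<sandboxed:profile_header>'] -- video_upload is not on the allowlist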

As I mentioned earlier, aviation accidents always have more than one cause. Accidents occur when multiple flaws occur in an interacting way that hasn't been observed or wasn't considered probable enough to worry about. This type of scenario in which good design ensures that no single flaw alone can cause an accident is common enough in engineering that it has a name, the Swiss Cheese Model. In the Swiss Cheese Model, each flaw is represented by a hole in the piece of cheese. No individual hole passes all the way through the cheese, however, when the holes line up in an unexpected way, there's a path through the cheese and the damage occurs.

We can make a Swiss cheese model of the Facebook View As breach using three holes. First, the over-privileged video upload service which fetched tokens for the wrong users. Second, the lack of a whitelist determining which widgets would be executed by View As, and finally, the lack of a sandbox.

UK NHS Breach “Type 2 Opt-Out”

Example number 2: another blame-the-programmer incident occurred with the type 2 opt-out system in the UK National Health Service; some people here might have been affected by this directly. Using type 2 opt-out, each patient can specify that their medical data can only be used for their own care, and not for commercial or even research purposes. The design specifies that the software at each site, such as a hospital, should submit the patient's opt-out selection along with the patient data to the NHS. The NHS software doesn't use the patient data for non-treatment purposes if the opt-out election is made. Except that the NHS failed to honor the opt-out requests of about 700,000 patients between March 2015 and June 2018, a period of more than three years.

Here's how the flaw occurred, the design required the software at the medical site to obtain the opt-out election by each patient and send the opt-out to the NHS along with the patient data. How could this data leak occur? Well, in this case, it's very simple, a particular vendor's software obtained the opt-out election but failed to submit that election to the NHS when submitting the patient data. Therefore, the NHS did not know that the patient had opted-out and the patient's data was distributed liberally.

Based solely on public information let's take a look at some design flaws that seem to be present in this example. First, what's the basic use case that we're looking at here? We're talking about granting data access to third-party applications. Most of us are probably familiar with doing that sort of thing, allowing applications to interact with the data that we own and keep in places like Google, Slack, and GitHub. There are some well-known solutions to this such as OAuth 2, we want access to our data to be denied rather than allowed by default. If opt-out had adopted a design of default/deny, then the design could've ensured that if opt-out elections were mishandled, data would be under-shared rather than over-shared.

Think about how long the breach went on for, more than three years. Allowing the leak to continue for such a long time indicates that there should've been a time limit on the data-sharing election. Each user could've been asked to review and renew their choices on a periodic basis, perhaps annually. It appears that the users were not being informed about how their data was being used. If users were receiving reports on the usage, it's likely that even if only a small number of them were paying attention to the reports, somebody would've noticed much quicker that their data was being used improperly.
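A minimal sketch of the deny-by-default, expiring election being described; the patient identifiers and the one-year renewal period are assumptions for illustration, not the actual NHS design:

    from datetime import date, timedelta

    ELECTION_VALIDITY = timedelta(days=365)  # assumed annual renewal period

    # sharing elections received from the care provider; absence means "never share"
    elections: dict[str, date] = {
        "patient-123": date(2018, 3, 1),   # sharing permitted, last renewed on this date
        # "patient-456" has no record: perhaps the vendor software failed to send it
    }

    def may_share_for_research(patient_id: str, today: date) -> bool:
        granted_on = elections.get(patient_id)
        if granted_on is None:
            return False                          # deny by default
        if today - granted_on > ELECTION_VALIDITY:
            return False                          # election expired, must be renewed
        return True

    print(may_share_for_research("patient-123", date(2018, 6, 1)))  # True
    print(may_share_for_research("patient-123", date(2019, 6, 1)))  # False: expired
    print(may_share_for_research("patient-456", date(2018, 6, 1)))  # False: no record

Under this kind of rule, a lost or unsent election results in under-sharing rather than over-sharing, and a forgotten renewal shuts sharing off instead of letting it run for years.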

To bring it back to the Swiss cheese model, the opt-out election wasn't sent from the hospital to the NHS, the NHS distributed data liberally by default. The user's data sharing choices had no expiration and no renewal, and the users didn't receive information about the usage of data that would've enabled them to raise the alarm.

Back to the Drawing Board

When a machine fails spectacularly, we say, "Back to the drawing board." When this phrase was coined, the drawing board was literally the tool used to create designs. Engineers knew that the machine as built would faithfully follow the design, so drawings enabled them to understand and relate to the systems they were designing. What do our drawing boards look like?

Whiteboards, everybody knows about these. One problem with whiteboard diagrams is that they make a lot more sense to the people who watched them being drawn. The final result looks like just a picture, but in fact the drawing is made in a certain order, and the order in which the boxes and lines are added usually carries a lot of information about how the system works. For example, what happens first?

This information is lost when looking at a static picture of the final result. Also, drawing is pretty slow and people in meetings can talk pretty fast. When you're trying to draw in real time to a roomful of people, drawings can devolve into scribbles. At the very least there's usually a lot of conversation about the drawing that isn't captured in the drawing itself.

How We Communicate

Chat. Here's a design discussion in a chatroom, and I wonder how familiar this looks to people. In fact, I'm definitely interested to know all the different ways that people are tackling this today; talk to me after if you want to chat about that. This kind of method works pretty well for people who are aware of it and focused on it, but it probably happens in an ad hoc way. The nature of chat is that it's just a conversation, and sometimes design discussions just happen, which is OK; we wouldn't want to say that people can only talk about design at certain times and according to certain rules.

If you're someone who missed the stream of pictures and text in chat, it's really hard to catch up later. I've heard this complaint frequently from people who don't spend all day in the chatrooms the way that developers do, and that would be folks like pre-sales, consultants, product managers, and security teams. They want to participate, but the method and the medium don't work for them.

Wiki. Normally I see wikis used to record institutional knowledge that needs to be written down somewhere. Someone says, "This should go into the wiki," and then maybe it does. Wikis are actually great for recording encyclopedic knowledge; Wikipedia is the obvious example. If you have a great wiki, then it's probably an excellent place to archive your knowledge and design artifacts. It's not a great place to create them, because wikis aren't great for having live, active discussions. The UI for the editors isn't really very good, it's pretty much impossible to work with wiki editing on your phone, and mobile is big now.

The developer in me says, "Let's just use code." We have pull requests, everything funnels through them, they're collaborative, it's a well-understood part of the process. But it's a bit like walking the factory floor in order to discuss the design of the airplane; it's too much detail. When you use code as the design artifact, then you exclude non-programmers, and you exclude a lot of people who might have something valuable to say.

The Cognitive Artifact

What we want is something that's called a "cognitive artifact." If you want people to understand your thinking, then we need to invest in these clear and accurate diagrams and in keeping them up-to-date over time. Cognitive artifact means an object or drawing which people use to organize their collective thinking and problem-solving. This is a UML sequence diagram for OAuth 2. In my experience, sequence diagrams are useful for working with people in non-coding roles like security and product management because they have what I'd call the Goldilocks quality: they're technical enough to accurately model the software, but they're not so technical that you need programming expertise to understand them. Along with what's called component or whiteboard-style diagrams, boxes and lines basically, I've found sequence diagrams to be useful in design documents that I've created myself.

What makes for a good cognitive artifact? First, it needs to use a medium or platform that can accommodate the scope of the problem. When building the X-15, some of the drawings were 20 feet long. Try not to let the design be constrained by the medium you choose. If you saw yesterday's talk on using Java to plan spacecraft orbits, they use custom Java software to visualize hundreds of thousands of possible orbits simultaneously, that's the kind of thing that our tools are capable of.

A lot of software design tools look and feel like they're from the '90s or earlier because they are; we need more powerful software design tools. A successful cognitive artifact will also be built on a collaborative platform; it's not handed down on stone tablets. Pull requests in DevOps are all about collaboration, and design reviews work the same way. We want to beware of single-user tools like emailing PowerPoint around. They add friction and degrade the process, because your collaborators need to be able to make comments and even changes.

You want to keep the cognitive artifact next to the code so that it can be changed and updated along with the code. If the code is in one place and the design is somewhere completely different, then they deviate. You want to make sure people find and work with cognitive artifacts actively during their normal flow of work. For example, links in the readme, links in the pull requests, simple stuff like that. Then keep in mind that the quality of the product will probably be only as good as the design, as the cognitive artifact. A brilliant coder cannot save a weak or flawed design, so in an Agile process, it's critical that cognitive artifacts are created and reviewed ahead of, or at least along with, the code being written.

Simple Formula for a Design Document

On a personal note, this is a simple formula that I've used pretty successfully for making design documents. Generally, include the following sections, which you can see on the slide: overview, diagrams, design discussion, API spec, and Q&A. The diagram is the most accessible section, and that's where I get all the feedback, especially from a broader group of people. In second place is the API spec. Even though I put a lot of thought and work into the prose, the design discussion, and the overview, it doesn't have the same power as a cognitive artifact as the diagram and the API spec do.

Another tip is to use RFCs as much as possible. This is a suggestion I've gotten a lot over the course of my career, and it's always been very helpful. Even if the existing RFC doesn't match what you're doing exactly, you can gain valuable insight by comparing what you're doing to an RFC, and then you can explain your design in terms of, hopefully, minor differences from an RFC rather than as a completely from-scratch design. If you want people to be able to come together, understand a problem, and provide useful input, think of the cognitive artifact that they will all gather around and focus on. Invest in the cognitive artifact, and then your investment will be repaid with engagement and good suggestions.

Visualizing “As Designed” Versus “As Built”

Let's take a closer look at why design diagrams are often out-of-date. Once a team makes a mental switch from design to coding, I hate to use that word, but they inevitably discover that the design won't work exactly as planned, or even close, so they make adjustments in order to overcome whatever obstacle they may be facing. This in itself is ordinary and natural, since we can never anticipate all the issues that we will face when we put fingers to keyboard and start coding. The problem is that the design changes that are made implicitly during coding are not reflected back to the design artifacts, and that's how they get out-of-date. We need clear design artifacts that are up-to-date and can be used for threat modeling and for discovering potential security flaws.

Now, updating design documents sounds tedious, and probably is. But if we want to be able to understand the design of a product 1 or 5 or 10 years in the future, it's going to be essential to have this information. If you have a good catalog of designs, I think you'll find a lot of great uses for them. For example, you can onboard people into your organization faster because it's easier for them to understand what already exists, and then you can move people between projects more quickly, which can be really critical as well. In any case, there's a saying in software: "Always code as if the person who will maintain your code will be a violent psychopath who knows where you live." Invest in the visual artifacts of the design and invest in keeping them up-to-date.
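One hypothetical way to make that investment routine is a small pre-merge check that flags branches which change application code without touching the design artifacts stored next to it; the src/ and docs/design/ paths here are assumptions, not a standard:

    import subprocess
    import sys

    def changed_files(base: str = "origin/main") -> list[str]:
        # List files changed on this branch relative to the base branch.
        out = subprocess.run(
            ["git", "diff", "--name-only", f"{base}...HEAD"],
            capture_output=True, text=True, check=True,
        )
        return [line for line in out.stdout.splitlines() if line]

    def main() -> int:
        files = changed_files()
        code_changed = any(f.startswith("src/") for f in files)          # assumed layout
        design_changed = any(f.startswith("docs/design/") for f in files)
        if code_changed and not design_changed:
            print("Code changed but docs/design/ did not; update the design or confirm it still holds.")
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())

A check like this doesn't guarantee the diagrams are accurate, but it at least forces the as-built versus as-designed question at every merge.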

Get More Eyes on the Problem

Back to planes, briefly. The plane on the left is a chase plane. The purpose of this plane is to observe and provide feedback for the pilot on things the pilot might not otherwise know; it's like a double check. For example, during the X-15 rocket startup, the pilot flips a switch to prime the rocket motor, and when he does that, a visible puff of smoke comes out of the engine that the pilot can't see. The pilot confirms with the chase plane that the engine is primed before proceeding with the engine start checklist. This procedure provides extra redundancy and safety, which is useful because this whole part of the plane is 15,000 pounds of ethyl alcohol and liquid oxygen, so you want to be careful when you light that.

Software projects can blow up as well. The Facebook View As feature was deactivated and has not been reactivated yet. The type 2 opt-out design has been completely scrapped and is being replaced with a new design called global opt-out. All the code written to support View As and type 2 opt-out, and not just the code but the mental energy and the dollars and all of that, is wasted, and in fact, those projects probably created negative value, unfortunately.

Widen the Circle to Make Design More Accessible

Continuing the analogy of the chase plane, there's a man named Scott Crossfield who flew the X-1 and the Skyrocket rocket planes for NACA. He felt that there was a major issue with experimental planes, which is that they were technically capable but difficult for pilots to operate. They were being designed by engineers without pilot input, so he became that pilot input: he left his active flight duty to join the X-15 development program. He stayed and worked on the program throughout the development to ensure that the pilot's point of view was taken into consideration, and he made many design suggestions to improve efficiency and safety.

When do you want to have an expert like Scott Crossfield helping you make your design better? Remember that each person in your organization (and here we're talking about security, but it's generalizable) has a unique perspective on the software. As Linus Torvalds said, "Given enough eyeballs, all bugs are shallow."

The best way to produce a robust and secure design is to expose your design to people with unique and useful perspectives. In order to do that, as we've been discussing, you need to make your design proposal accessible, and reviewers need to understand what's being asked of them, because you can't just ask everyone to be a security person and an engineer and an operator and a marketer and a product manager, which is what DevOps sometimes sounds like to me.

Elements of a Design Review

To be specific, the design review, which is what I'm proposing here, is not a new idea, but maybe it's gotten a little bit lost. It's a clever tool which transforms individual risk into shared responsibility; it does that for design just like the pull request does that for code. I think the coolest thing about a pull request is that it makes it OK to critique someone else's work, which was always a really sensitive subject before pull requests came along, I think. They do that because they relieve the author of full responsibility for the end result.

It's like a bargain: I'll let you call my baby ugly, but now it's your baby too. The design review is just like a pull request but less technical; it should be demonstrated in a non-technical way, in other words, other than by reading code. It's reviewed by a domain expert buddy, your Scott Crossfield in the chase plane. Its purpose is to make sure that the design is good, for whatever definition of good makes sense in your current iteration, before code-test-deploy gets started in earnest.

The riskier the component or project, the more intensively it needs to be reviewed; we'll talk about that in just a second. I've added this bit since coming to the conference: there's still lots of talk and enthusiasm about microservices, and I think it's worth pointing out that isolating security functions into microservices not only limits the blast radius of outages and such, but also makes it easier to ensure that the most security-sensitive code gets the most attention. Thumbs up for microservices that implement security, and also for security libraries on which you standardize.

There's a lot of attention paid to coding, testing, and deployment, but not enough paid to design. Design will become the security constraint, as evidenced by well-automated companies such as Facebook, which has plenty of unit tests, plenty of code scanning, plenty of security experts, and plenty of super-smart developers, and is still making mistakes. It's not about having smarter programmers or doing more work.

Design Review: Who And How Much?

We want to get stakeholders involved. What's the right level of involvement for the task at hand? Here's a simple model that breaks down the importance of design review into three buckets according to an assessment of risk. The first bucket is your known knowns, these are standard design patterns such as RFCs. You've done this before, it's familiar, lots of smart people worked hard to create this design. Lots of people internal and external are using it and you can trust it.

This is the best place to be, which is why I mentioned RFCs earlier as well. In this case, basically what you need to do is clearly indicate which design you're using, and then make sure you implement it correctly. The design effort required here is fairly small, although you'll want good test coverage to ensure that you've got the implementation right because after all, this is security code.

Suppose, case number 2, there are some internal libraries, internal standards or there's already an adopted pattern within your organization or a well-regarded and active open-source project that you can use, let's call this known unknowns. The design's vetted, but you still have to be careful you're applying it correctly. You have a fair amount of code to write, you may want to get a crypto audit or a pen test before you release it, and it's worth an investment in design diagrams and design reviews to make sure you're understanding it right and applying it right. There can be sharp corners here, especially in security work. It's an unforgiving place to develop, so if you don't understand the meaning of something, find someone to help. Be sure you can succinctly describe the problem you're solving and the questions you have. If you've thought of more than one possible solution, then describe each of them.

Finally, over here on the right, we have the real problem area, the unknown unknowns. This is where you're in truly new territory, implementing a new security-sensitive design, and you don't have an RFC or a well-vetted library to base your work on. Validating new designs is not a matter of writing tests, because tests only verify that the code follows the design. They don't verify that the design itself is secure.

Looking back at the Facebook breach, impersonating other users within the system is a high-security-risk feature, and there isn't a well-known design for how to do this. When you take on a feature like this, it's going to require some effort, and that's OK; it needs that effort, and more people involved, to make sure that the design is secure, and we have to take on the monetary and schedule costs of doing it. If it's not worth the cost, then don't do it, because you really need to think these designs through, and we need all the necessary eyeballs looking at them.

Key Takeaways

Security design flaws are not bugs. Bugs are things like keystroke errors and low-level logic errors; a bug is an unintentional deviation from the design. In this talk, we're talking about design flaws and not bugs; easy-to-find bugs we can test or scan for. Design flaws will not usually make test cases fail, because the code is faithfully following the design, which is unfortunately flawed.

Remember, it's not too common for a bug to completely sink a project but design flaws destroy projects, Facebook View As was written off, and type 2 opt-out suffered the same fate. As companies mature through the DevSecOps cycle, when you get to the mature side of it, design problems are the things you'll need to focus on. Design is the new threat surface, so we're talking about getting more eyes on design.

Agile, flow, and continuous delivery have taken us away from using visuals to work on our designs, but design diagrams and design reviews, and the conversations that they spark, do help identify problems, so invest in those cognitive artifacts and the design review processes that create them. It's not especially fun to update design diagrams and documents to reflect as-built versus as-designed, but we need to do our best to ensure that the design artifacts reflect reality, because you'll lower the risk of your business in multiple ways. Most importantly, understand where the security risk lies. Be aware when a design is entering security territory, and be willing to make the effort necessary to produce a robust and secure design. Rely on the best resource you have to do that, which is the diversity of experts in your organization.

One more thing before I'm done. You see I'm into aviation, so on Sunday I took the tube up to the RAF Museum in Colindale, and this is a photograph of a photograph from the World War I room; it's a great museum, by the way. Here are some things I really liked about this picture: first of all, it's big, it's big and bold. The room is big, the space is big, the desks are big, the stools are big; this is not a cube farm, it's even tall.

Note the mixed-gender workforce: women and men working together in this office. Diversity is critical to informing strong design decisions, and the more perspectives you can include in your design, the more clearly you'll understand it and the better it will be. How many good security designs have been created by one person? Zero. RSA: three guys, one algorithm.

Then, of course, look, the drawings over here, they're big. You can see he's using this nice, big French curve. When this drawing is done, it's going to be visible, it's going to be clear; it's matching the scale of the thing that they're doing. This photo says to me that design is good, design is worthwhile, and design is a discipline and it's also an art form, and that's where I hope to see us get to as an industry.

Questions and Answers

Moderator: One of the things that Michael mentioned in his talk was this notion of an anti-persona, like somebody coming up with a misuse case. Have you seen that sort of thing applied? What are your thoughts about those patterns? You mentioned adversaries a little bit; do you feel like they're the same, or?

Gilpin: The adversary here is really our own cleverness. It's like the adversary of the X-15 was nature itself: push the frontier hard enough and the frontier pushes back. That's what's happening in these cases.

Moderator: Understanding the boundaries and what you're pushing up against?

Gilpin: Yes, your limitations, that's right.

Participant 1: Great talk, thank you. About the breach analysis, how do you know that those are the causes and not others? How do you establish a technique to do a breach analysis?

Gilpin: Well, for these two particular examples that I talked about, I credit the organizations behind them for being forthcoming about what happened. This is a tired analogy, but it's like an iceberg: for the two examples that you see, how many more are there that are not reported? I guess they're probably mostly being reported right now, but there's no requirement that you have to divulge this much information about what went on. It's probably much more comfortable to make a quick announcement, ideally when something else important is happening so people don't notice, and then hope that it's going to sink. There was enough information about these breaches that I think the things that I said here hopefully will stand up.

Moderator: Worth noting, maybe, that they weren't just breaches, they were design flaws; they were breaches as well, but I think the conversation was more about the design mistakes there.

Gilpin: Right.

Moderator: More design analysis techniques than breach techniques.

Gilpin: Yes, it's hard to know exactly what word to use. They get bucketed as breaches but ...

Moderator: Thank you for the presentation. I found it very interesting that you used the avionics industry as an example for security. We have a lot of talks about Agile, and the principles of Agile are to build something small and grow it gradually. That's not something that fits very well in avionics, and there are many talks in Agile about how we can make sure that we have quality, that we don't just build something fast that doesn't work, but something that has quality in it. I always use the avionics industry as an example: look, some industries don't have these problems because they are not Agile. They cannot be like that, they cannot ship something small and then the pilot dies.

Also, many diagrams that you showed, how we do design and then coding, and then we have design and design reviews, look a bit Waterfall-ish to my eyes. Here comes the question: can security really be Agile? Because security, for me, looks like a Boolean value; you either have it or you don't, because if you have small flaws in there, it means that you don't have security. Can security really be Agile, or do we always have to be a little bit Waterfall-ish and a bit more rigid in the way we develop?

Gilpin: I had some material about this. The way that I've made this more compatible with Agile, when I'm doing things like this, is essentially to be continually shipping versions of the design and the prototype code that goes along with it, because one of the things that has worked best for me historically is showing things end-to-end. One of the issues with Waterfall is that you don't close the cycle, so you don't understand the feedback that you're going to get at the end. Going end-to-end, you learn more quickly about the end state of the thing that you're building.

That doesn't necessarily have to be with product that you're shipping; it can be with prototype code and experiments, and I'll do those kinds of prototyping exercises and talk about them at standup and present them during sprint reviews and all that kind of stuff. Use the Agile methodology, but you're just not actually releasing into production, because like you said, there is a bar that you have to get over before you can do that.

Moderator: I'd like to highlight two things Kevin did mention in the talk that I think relate here as well. One is the notion of this design artifact growing and evolving, which I think applies regardless of whether you're iterating every six months or every six hours; you're just iterating on the design artifact. The other is the chart talking about where you put in the effort. Maybe when you're dealing with a high-risk asset with a high level of unknowns, the unknown unknowns, then slow down.

Participant 2: You mentioned the importance of having design artifacts which people can collaborate on and which can live with the code. What are good tools at the moment for doing that?

Gilpin: That's something I personally want to work on. I think that the software that we use is capable of so much more than we're sometimes actually getting out of it. We can pretty much make movies in real time now, and where is that kind of technology for a problem as important as software design? With that said, there are things like whiteboards; my intent is not to say don't use whiteboards. People have collectively, implicitly evolved representations of things that they think are helpful to understanding software. Representations like UML have value, but I think what's missing is bringing those forward to modern expectations of fidelity, dynamism, and collaboration.

We're seeing things like Slack taking a step ahead based exactly on capabilities like that, burying email and HipChat and some other stuff. It's certainly not inventing a new thing that no one ever thought about, just making it feel modern, like, "yes, this is what I wanted this tool to feel like," living up to the potential of the devices and technology that we have. With that said, I'd say Google Docs is probably the number one thing that I've used over the last five or six years, because you can draw and you can write prose and it's very collaborative. Then I went into an environment where we weren't allowed to use it anymore, and I felt lost without it.

 


 

Recorded at:

Jun 25, 2019
