
Panel: Ethics in Software Engineering


Summary

The panelists explore emerging ethical issues related to software engineering, as well as how they can potentially be addressed. The panelists represent a diverse set of perspectives - from professional society to industry to academics.

Bio

Ayana Miller is a Privacy & Data Protection Advisor at Pinterest. Bruce DeBruhl is an Assistant Professor at Cal Poly San Luis Obispo in the Computer Science and Software Engineering Department. Theo Schlossnagle is the founder and CEO at Circonus. Megan Cristina is Chief Privacy Officer at Slack.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Moderator: The reason that we put this panel together is to have a discussion, not just among us, but also with you, about emerging ethical issues that we're seeing in the software community. On this panel, we have people who have been working on these issues for the last decade or so. We will be able to talk about current emerging ethical issues as well as potential solutions. I will also save a healthy amount of time for questions from the audience. Maybe we can start with Ayana [Miller] and go down the line for the panelists to introduce themselves, and maybe also talk about how you came into working on ethics and tech.

Miller: My name is Ayana Miller, I am a Privacy and Data Governance Technical Program Manager at Pinterest. Before that, I was a privacy engineer at Snapchat, and I have worked with the FTC and Facebook as well, and started my career in systems engineering thinking about systems design. Over the past decade I've come into this from a privacy perspective, looking at the way developers build systems and think about privacy by design, or not, and how to build those now with new legislation coming [inaudible 00:01:48] GDPR, CCPA, and other talk of legislation.

There are also commitments to privacy that companies make to policymakers and regulators, so I think that's an important lens I look at things through. It's not just what the industry is saying but also what commitments a company makes. I think that's what [inaudible 00:02:08] from my perspective drives your ethical decision-making.

Schlossnagle: I'm Theo Schlossnagle, serial entrepreneur, currently the CTO of Circonus. I've been a participant in the ACM for many years. I'm a distinguished member there, also one of the elected members at large, and I sit on the practitioners board of ACM Queue. Why do I care about ethics? I think it's really selfish actually. I'm getting older, and I want to live in the world that all the technology these kids are creating enables. I'm a big proponent of human rights, and I see them being violated. It hasn't been a full 10 years; I think about six years ago I became actively interested in promoting ethical considerations and thinking in software engineering because of that.

Cristina: My name is Megan Cristina, I'm the Chief Privacy Officer at Slack. My team at Slack is responsible for the privacy program and data governance initiatives. For example, when GDPR was coming into effect, we were leading the way with all the policies, processes, and product changes that we had to make to be in compliance and help our customers. Prior to Slack, I spent the last 15 or so years in trust and safety work; I was at Twitter and before that at Yahoo, also doing policy, privacy, and safety work.

This is interesting to me, I think, because after years of watching products be built, being on the team that's responsible for looking at all of the ways they are used that are not the intended ways, and seeing the worst of the internet, what gets me excited is making the world a little bit of a better place by finding solutions that make a better experience for all of the users and customers that we have.

These are some of the hardest problems to solve and that's what I love about them. You never get to a point where you're just, "Ok, we solved it all. There's no more abuse on the internet." You solve one thing a little bit and then it just becomes harder and more complicated, but that's what I find interesting about it.

DeBruhl: I'm Bruce DeBruhl. I'm an assistant professor of computer science, software engineering, and computer engineering down at Cal Poly. My focus is largely in the areas of security and privacy, which are quite hard to separate from ethics. In my intro to security classes, as well as my privacy engineering class, a large component of the class is focused on how you approach these problems while considering the humans and the ethics around the humans. Beyond that, I'm also actively researching collaborative autonomous vehicles, the ethical implications of those, and all the fun things that can go wrong in that space.

Top Ethical Issues in Software Space

Moderator: I think now you can see why I am excited to have all these folks here, because we actually have perspectives from academia, industry, and a professional society. I would love each of your thoughts on what some of the top ethical issues facing the software space right now are. I think you referenced privacy as the prime example, but are there others that you care about or that are keeping you up at night?

DeBruhl: With privacy in particular, I'm interested in the question and challenge of allowing people to make reasoned decisions about their data. I think a lot of the architectures and the digital ecosystem we're living in make it very, very difficult for an everyday person to understand what's going on. We even see this amongst iGen college students when we've done surveys looking at what they actually understand about their privacy settings and whether that matches how they're actually acting.

No, they have no clue how it works, even though they've been raised with this technology from two years old in a lot of cases. I think from the privacy standpoint there are a lot of open user interface questions still to be solved. I also think that ethics in smart X (cities, cars, transportation, airplanes, of course) is going to raise a lot of interesting and challenging problems.

Cristina: I agree with that completely, especially the privacy issues; I think I've faced those with all the different products at Yahoo, through Twitter, and now at Slack. In addition, something that is less relevant in my current role but is still really top of mind for me is the issue of misinformation and manipulated media, the effect it's having on our democracy and on the world, and how platforms really manage that at scale. Having worked really closely with a lot of the people at all of the major platforms who are dealing with this, we have some very intelligent, very passionate, very well-meaning people trying to solve these problems, but they are hard.

I think that's one of the biggest issues that I think about, and one of the hardest. I don't have the solution for it, but I'm glad that there are a lot of people focused on it right now. I was just reading a blog post from Del at Twitter about their new policy around manipulated media and how they're coming up with creative ways to at least flag it for people, which is a big leap from where we were even just a year ago. It may not be the right solution, but there are people trying to solve these problems. It gives me some peace to know that there are bright minds out there on this.

Schlossnagle: I also think privacy is a critical issue, but I think it's pretty well covered by the rest of the panel. What concerns me is the general lackadaisical attitude of systems engineers and software developers who think it's not their purview or responsibility to tackle the issue. I think that manifests significantly in the craze around machine learning and AI. These systems are all built on data, so it's all systemically biased data, however it is biased. We're building reinforcement systems that are effectively systems of oppression, and there's just a lot of, "Yes, we'll get to that," or, "We'll fix that," or, "It's not a big deal," and all of it is fundamentally a very big deal.

Miller: Those are all good ones. Something that's an oldie but I think is still a goodie for me, and it keeps me on my toes, is thinking about how the technologies that we adopt and incorporate into our products and services can actually have adverse impacts and unintended consequences. One of those that we all, especially in the privacy and security community, love is end-to-end encryption for things like message data, and we're seeing now, with the proliferation of companies thinking about privacy, that they go towards that as a solution.

While it's great for people who have good intentions and care about privacy, who are good citizens and consumers, for things like safety it becomes an issue. I think about children and their participation in social media, and also the role the government plays: "Should we build those back doors to allow some actors to be able to get access to information?" It's about keeping the balance between creating good privacy solutions and creating solutions that are so good you can't weed out the bad guys; bad guys are able to go undetected as well, so it's hard to keep that balance.

Prioritizing Ethical Issues

Moderator: Given that there are quite a few ethical issues out there, how do you prioritize them? I know that privacy right now is top of mind for many companies because of regulations that are coming down; GDPR is here already, CCPA is coming. When you have to juggle your resources between privacy solutions versus child safety solutions versus misinformation solutions, what are some of your approaches to prioritization, whether in the workplace or in academia? For example, what do you teach first? I would love your thoughts on that as well.

DeBruhl: On the academic side it's a really interesting challenge in that, by the time I get junior and senior-level computing majors in my elective courses, they've already been in a digital ecosystem and have particular views that have been shaped by that digital ecosystem. One of the things that we've been focusing on more lately, and one of the things that will be really interesting to explore more (I'm working with a couple of students on it), is how you actually make digital hygiene and digital citizenship a priority for elementary and early age groups.

We're actively exploring that digital safety piece for children, so a bit of a roundabout way to answer the question is that we're trying to increase safety for children by creating better curriculum for them. That's not to let the tech companies off the hook; they still need to come up with good solutions to protect children, but I think education around these issues needs to start much younger than we're currently doing.

Cristina: For prioritization in the workplace, I would say it depends. When there's a law coming into effect like GDPR, it forces prioritization. We knew that by a certain date we had to be compliant and we had to make sure our customers could be compliant, so therefore it was prioritized. CCPA is the same. When it comes to things that aren't necessarily mandated, that are just the right thing to do, then it comes down to judgment. Having seen a lot of the things that can go wrong with any software, you know the things that you can put in place early on that will scale. It may not be a big issue now, but it will be eventually. At Slack three years ago, we were moving quickly; you could get tools built in two weeks. Now, we have a much more formal prioritization process, and it comes down to building a case based on experience, honestly, and knowing what's going to happen if we don't do X, Y, or Z.

Schlossnagle: I'll take a softer approach to all of that. Given the velocity of computing in emerging fields, we really don't know what the hell we're doing at all. Nobody really knows what's going on. The landscape is going to be so different in five years that the policies and procedures we think of today will misapply badly. They'll be updated, but it's always a catch-up game.

I think there needs to be a tremendous amount of investment, in tandem with educating people about digital hygiene at a young age, in educating engineers in the concept that they should consider how the things they build are going to impact people. It's not just the lines of code you write, it's the entire product that you're building and the go-to-market strategy around that product: who it's designed to service, who it's designed to disservice. All of those things are really important, and so is the thought process.

For a long time, software engineers have been encouraged to meet the stakeholders. Why would you build something in a vacuum from a spec sheet? No, you need to talk to the customer. This is the same thing. I'm building software and it's going to affect human lives, so maybe I should look at those human lives. "Is what I am building going to improve the world tomorrow?" I don't think enough people reflect on that, so I think getting people on the right road, marching in the right direction, allows us to do micro corrections over time as opposed to some radical correction that needs to happen.

Cristina: Just to add onto that, probably 10 years ago I used to instruct my teams that the product PMs' and engineers' job was to think about all the great ways their product could be used, and your job was to come in and explain to them all of the ways it would be abused, all of the dark sides. It was more like a consultancy engagement.

Now, we've shifted and it's more of a partnership. We don't come in late and say, "Here's all the bad things"; we start early and educate the team so that we don't have to tell them for every new product how it'll be abused, they just know. It's more effort in the beginning and you have to build all those relationships, but it works so much better.

Miller: There's definitely the regulation angle. It's helping a lot, helping to set priorities and get funding. In the absence of that, I think we're also seeing more consent decrees, regulatory inquiry, and enforcement of companies' own privacy policies and the statements they make. Those definitely help. My other angle coming into this is looking at the consumer view and thinking about which groups of consumers are most interested in privacy. If you can start young, that's great. My thinking has taken me into the world of high-profile people and people who share a lot, and protecting the information that they don't want to share.

I've seen in my experience how tech companies treat VIPs. They have different sets of tools and experiences because their experience on the platforms is different. Those experiences eventually roll out to a broader base, so if we can extend that to the entire market when thinking about these ethical questions: "How do you treat your VIP customers, how do you treat your VIP clients? Can we make them advocates for services that would eventually get proliferated down to all users, to help them also appreciate and understand the value of their data and their experiences on these platforms?"

Technical Controls

Moderator: I think many of you have had experiences similar to what I'm about to share. For the audience, don't judge me too much, but many years ago when I first started in this area, one of the first controls I designed to protect data was a policy that I made all our engineers read and sign, promising that they would not mistreat our data. At that time, that particular control was deemed acceptable. I can say with a fair amount of certainty that it would not scale now. That leads me to my question for the panel: companies seem more receptive to technical solutions now, and we definitely see a shift from those manual, policy-based controls to technical solutions that will scale. Can you share some of your experience adopting technical controls at your companies, in your work?

Schlossnagle: I'll be the anti-speaker on that one first, and then everybody can talk about their technical solutions. Technical solutions all have flaws; there's always a way around them. For a victim, people taking your private data and sharing it with the world is not much different from people taking your private data, being compromised, and having it shared with the world. Your data's out there. I'm a victim of the Experian hack. I'm sure you can find my social security number online. I didn't consent to any of that collection, really. That's a huge problem.

At the end of the day, you can have all the policies and procedures you want, but if you don't have regulatory enforcement of those, if there are no consequences, then nothing really matters. If you look at someone who designs and builds a bridge, you need a practicing engineering license to do it in the first place, and if you screw up, your license gets revoked and you can't do that job anymore; you get to choose a different job, have another career.

We don't have anything remotely like that in computing, but I will tell you, when lots of people start dying because of our decisions, the government is going to set up something that looks very much like that for this industry, and we will reel because of it, and we will have no choice. It's going to hurt a lot, but I would support it. We have certain responsibilities, and if we don't collectively stand up and deliver on those responsibilities, then the government is going to put regulations in place that make us do it. As you would imagine, no one has ever looked at something the government has done and said, "That was perfect." There'll be a lot of consequences. I think regulation and enforcement of regulation is a much better strategy than technical solutions, though I think those are necessary as well.

DeBruhl: I'll join the counter-example case. I think there's been a lot of siloing on the academic side of things, where we tell our engineers that it's really important to take your computer science classes and your software engineering classes seriously, and let the ethics classes worry about ethics later. One of the things I'm actively doing is trying to introduce ethics into the security and privacy courses, so our students get both sides of that equation at the same time. I think we need to do more to train our technical personnel in ethics and social consciousness.

Cristina: I'll offer a technical solution that I think is effective. About 10 years ago, Microsoft, working through the Tech Coalition, developed PhotoDNA, which is technology that detects child sexual abuse material. It's not perfect, but 10 years ago it was pretty groundbreaking, and it's very effective at finding CSAM. Thorn is a nonprofit that is taking it a step further, and they're going to do a lot more with it, I think, but it is a technical solution that is finding things we never would have found otherwise, that is protecting children and saving lives.

Miller: My example will be more specific to the industry and my company. What I've actually seen as a theme across a couple of companies I've been at is building in-house versus looking at vendor solutions. I think that goes across the board: partnering with security to look at systems for authentication and authorization, and asking how you can create a system that serves as a policy server, so all of your engineers and teams that have certain jobs can write their policies for who can access what, with a central server holding that information. Those types of solutions we're seeing be successful. Also, building in-house tools for leasing data out, so even if you use a service like GCS or AWS for your data, you build a layer on top to be able to lease data out to engineers on a case-by-case basis, with expirations.

There's a TTL for how long they can use the data, and then you take it back. They're actually just leasing it out as opposed to having full access to all the data all the time. Especially when you have different environments, where some engineers have access to the production environment versus the development environment, you can separate those out completely. Depending on the maturity of the company, those types of solutions are often better built by the people who built the original systems (some of them are still there), people who are already working on it, as opposed to a vendor solution that you buy off the shelf and try to adapt.
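To make the data-leasing idea concrete, here is a minimal sketch in Python of a central lease server that grants time-limited access and is consulted before every read. The names and structure are hypothetical, not any company's actual tooling:

    import time
    import uuid
    from dataclasses import dataclass, field

    @dataclass
    class DataLease:
        dataset: str
        engineer: str
        ttl_seconds: int
        lease_id: str = field(default_factory=lambda: uuid.uuid4().hex)
        granted_at: float = field(default_factory=time.time)

        def is_active(self) -> bool:
            # The lease silently expires once its TTL has elapsed.
            return time.time() < self.granted_at + self.ttl_seconds

    class LeaseServer:
        """Central policy point: every data access must hold an active lease."""

        def __init__(self) -> None:
            self._leases: dict[str, DataLease] = {}

        def grant(self, dataset: str, engineer: str, ttl_seconds: int) -> DataLease:
            lease = DataLease(dataset, engineer, ttl_seconds)
            self._leases[lease.lease_id] = lease
            return lease

        def check(self, lease_id: str, dataset: str) -> bool:
            lease = self._leases.get(lease_id)
            return lease is not None and lease.dataset == dataset and lease.is_active()

        def revoke(self, lease_id: str) -> None:
            # "Take the data back" before the TTL runs out if needed.
            self._leases.pop(lease_id, None)

    # Usage: grant a one-hour lease and check it before serving a query.
    server = LeaseServer()
    lease = server.grant("clickstream_dataset", "engineer@example.com", ttl_seconds=3600)
    assert server.check(lease.lease_id, "clickstream_dataset")

A real deployment would sit this layer in front of the storage service (GCS, AWS) so that reads without an active lease fail, and would log every grant for auditing.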

Getting the Company on Board With Technical Solutions

Moderator: Just to follow up on that a little bit with Megan [Cristina] and Ayana [Miller]: when you were thinking of these technical solutions, for example PhotoDNA or the data management solutions you used, was it easy for you to get the company on board when you scoped them and proposed them? Did you actually want them built as a product or a service, versus as a policy written somewhere by the legal team or by a policy professional?

Cristina: I can't speak for Twitter because they had it long before I joined, but I can speak for Yahoo and for Slack, and it was easy. I mean, when you're talking about children being abused, nobody's going to push back on that. It's harder to get other technical solutions in place, for example, vendors that help with our GDPR compliance. That's a little bit more challenging because technically we could do it in-house, just not as well. Then, you just have to make the business case a little bit stronger.

Miller: I've gotten creative with how I approach it, so I'll say up front that yes, it's getting easier. I have the advantage of working at companies where the CEO offers a Q&A at different parts of the year. At the better of those companies it was every week, and so I've been able to go up on a Friday when there's an open question and say, "I work on privacy engineering. Here are the gaps. No one's addressing these. We don't want to be in the news. How can you help me get resources?" I've been that person. It's been effective, so that's what I've used to get support.

We recently created a privacy and data governance working group to be able to get people in legal involved, with the chief architect as a sponsor. I have people in procurement and IT, so as we think about solutions, everybody is on board, because oftentimes those people were all talking to me but they weren't talking to each other. That's what's really going to help us get to the point where we're not just talking but acting, and that's what I try to optimize for.

Schlossnagle: I think that's awesome. How many people work at a company where you have an advocate like this? That's more than I thought, five or six. That's super effective, that's like constant reinforced engagement with engineers that this whole thing is important.

Having Technical Solutions When Talking to Regulators

Moderator: From the regulatory perspective, from what I've heard (and I would love to get your thoughts on this too), regulators, for example data protection officers, have been getting a little more technical in how they assess companies. Before, audits were performed more in an interview style and were less technical in nature; now I've heard they bring their own tech team inside a company to do technical testing. I'm curious: do you think having technical solutions will help when you talk to regulators, versus not having them, or would it confuse them?

Cristina: I think it depends on the regulator, who you're talking to. It can't hurt; it can only help. How well they understand what you're talking about is going to depend on who you're talking to, but I can't see it hurting.

Schlossnagle: In a lot of cases, those tools are automation and efficiencies on top of processes and policies, so they can't help but support them; you don't tend to build a tool in isolation from an intent. I think regulators really care that the intent is stated the way that it is, and they care that you have a mechanism for achieving it. If you've automated that through code and technology, then ideally you've just done the whole thing more efficiently. You do always have the chance of mistraining the systems, especially if they're ML-based, so right now you can't have a full set of confidence, but I don't see the tools hurting. I mean, if you led with the tools as an excuse for not needing to do things, that would be bad, but I don't think anybody would ever do that.

Cristina: The tools are going to create a more auditable trail. If you just say, "We have all these policies in place," they're going to ask for proof: "How are you implementing it? Let me see that." The tools give you that.

Limitations of Technical Solutions

Moderator: I'm going to maybe start with Theo [Schlossnagle] and Bruce [DeBruhl] for my next question. You both touched on this before: the limitations of technical solutions. We've seen many diverging views on these solutions. For example, differential privacy took a while to move from academic research into adoption by ordinary companies; there are a few with the scale and resources to implement it, but not all can do so. We have also seen criticism around bias in the solutions themselves. From your perspective, for people who choose to go about creating these technical solutions, what should they keep in mind to make sure the solutions are effective and can actually be used by people in the industry?

Schlossnagle: I think there is no way to avoid bias in algorithms in general. If an algorithm is making a judgment on a human being at the end, it either has the coder's bias codified in it, or it has the input data's bias codified in it, or, most likely, both. I think the biggest problem in our industry is that we're looking at this as computer scientists instead of ethicists. Human rights isn't an optimization problem, and we're looking at it as an optimization problem. Take deciding who should get credit and who shouldn't: human beings in our society need credit. Treating it as an optimization problem to reduce risk for credit providers ignores the entire human, societal aspect, the need of the community to have access to credit. We're treating the entire problem space as an optimization problem and it's dehumanizing. That's the fundamental problem with that.

I don't have a solution to it but the one thing I would beg is that as a part of the software development lifecycle, you constantly ask the question, "What are the implications of this? What are the consequences? What are the ethical outcomes of this?" You constantly ask that question through the iterative process so that when the time comes and something bad happens, because it will, you're not blindsided by that at all. You've been asking that question all along and hopefully, you've caught it a little earlier.

DeBruhl: To echo that, continually looking at abuse cases and the ways things can go wrong is so important in the design of these tools. Oftentimes you'll try to design a tool and ignore edge cases that will leak the data in different exciting ways. Also, I would say try to avoid technical silver bullets, tools that claim to solve all problems; those generally don't exist. Going back to the encryption example earlier: yes, it's great for privacy, but it's also great for child predators. There's a tradeoff.

Cristina: My team doesn't build AI, but I can say from an enforcement perspective, when I worked more on content moderation, I would intentionally hire extremely different views onto my team: diversity in background and experience, but also in the way they look at content. People who always want to leave everything up and people who always want to take everything down. Then we would have these heated debates, and somewhere in the middle was the right answer.

Schlossnagle: Content fight club.

Cristina: Yes, it was great. You hear the most extreme views and things that you may not have thought of because you're bringing your own biases into this even though you try your best to leave them aside. I think it just creates the best policies and the best moderation.

Miller: I think there's some interesting stuff going on around federated machine learning models. Again, no silver bullets, but there's some interesting stuff where it can help: being able to process information on someone's device and not share their personal information back up to the cloud, but still generate the value and insights that can help improve the product. I think there's a lot of opportunity there. I'm excited to see what the industry comes up with.
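As a toy illustration of that federated idea, the sketch below trains a shared linear model where each device computes an update on its own data and only the model parameters, never the raw examples, are sent back and averaged. This is illustrative only; production federated-learning systems add secure aggregation, differential privacy, and much more:

    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """Train on-device; only the updated weights leave the device."""
        w = weights.copy()
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
            w -= lr * grad
        return w

    def federated_round(global_w, devices):
        # The server sees only parameter updates, not anyone's raw data.
        updates = [local_update(global_w, X, y) for X, y in devices]
        return np.mean(updates, axis=0)          # federated averaging

    # Three "devices", each holding private data that never leaves them.
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    devices = []
    for _ in range(3):
        X = rng.normal(size=(50, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=50)
        devices.append((X, y))

    w = np.zeros(2)
    for _ in range(20):
        w = federated_round(w, devices)
    print(w)  # approaches [2.0, -1.0] without ever pooling the raw data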

Schlossnagle: Playing off that a little bit, I'm concerned about where the regulation comes in. I think Goldman just released a statement that they cannot possibly have gender bias in their hiring process because they don't ask about gender or marital status. The question there is, "Are you so incompetent that you can't get nearly 100% correlated accuracy guessing that from the data that you have, or are you lying?" Neither of those things is very good.

I think consumers have a horrible idea of what their data hygiene is. Especially if you don't work in data science, machine learning, or inference or anything like that, you have an under-appreciation of how anonymized data is almost never anonymized. There's a great case, Latanya Sweeney I think did it out of Harvard, where she looked at some anonymized medical records and was able to identify six specific people in a state that had these issues.

DeBruhl: The lieutenant governor of the state was one of them.

Schlossnagle: Yes. I mean, when you take someone's data, one, I think there needs to be liability for leaking it, regardless of how it gets leaked, but I also think there's a misconception among engineers: "If I just remove the name and obscure five digits of the social security number, no one will know who this is." That is patently false, and it's amazing how much data you can erase and still recover. It's dangerous.
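A minimal illustration of why stripping names is not anonymization: the linkage idea behind the study mentioned above is simply a join between the "anonymized" records and a public dataset (such as a voter roll) on quasi-identifiers like ZIP code, birthdate, and sex. The data below is made up:

    # "Anonymized" medical records: names removed, quasi-identifiers kept.
    medical = [
        {"zip": "02138", "birthdate": "1945-07-31", "sex": "M", "diagnosis": "condition A"},
        {"zip": "02139", "birthdate": "1962-03-14", "sex": "F", "diagnosis": "condition B"},
    ]

    # Public records containing the same quasi-identifiers plus names.
    voters = [
        {"name": "A. Smith", "zip": "02138", "birthdate": "1945-07-31", "sex": "M"},
        {"name": "B. Jones", "zip": "02139", "birthdate": "1962-03-14", "sex": "F"},
    ]

    QUASI = ("zip", "birthdate", "sex")

    def link(anon_rows, public_rows):
        """Re-attach names by joining the two datasets on quasi-identifiers."""
        index = {tuple(r[k] for k in QUASI): r["name"] for r in public_rows}
        for row in anon_rows:
            key = tuple(row[k] for k in QUASI)
            if key in index:
                yield index[key], row["diagnosis"]

    print(list(link(medical, voters)))
    # [('A. Smith', 'condition A'), ('B. Jones', 'condition B')]

The combination of ZIP code, birthdate, and sex alone is unique for a large fraction of the population, which is why removing names and a few digits is not enough.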

Hitting Back at Hackers

Participant 1: I work as a security software engineer for Pluralsight. This is adjacent to what the title says we're talking about: software engineering-related topics. We talk a lot about defense, and at some point you can only defend so much. Things are going to happen. There are really smart people trying to get access to lots of data. I'm curious about your thoughts on red teaming and actually going out and hitting back at these people. They're not just hacking you, they're hacking everybody and leaking all of this information. They're just bad actors. What do you think about that?

Schlossnagle: I would say that, from a philosophical point of view, we all live in a society that's governed by rules, and we've delegated the use of violence to a government. Retaliation is not for private citizens to do. You can't go beat somebody up because they stole your stuff, though in certain states apparently you can shoot them. The concept that there needs to be due process for that is serious.

I realize that it's really hard when you're fighting a ghost most of the time. These things don't have names, and they're not inside jurisdictional boundaries. I think it's a slippery slope, and a real one, where you're defending yourself against some rogue party and that rogue party turns out to be some U.S. citizen or a nation-state, and that's probably not who you want to pick a fight with. I think we should beef up our national capabilities for doing that, and the law enforcement around it.

DeBruhl: I'll also argue that it's really good to hire red team engineers to test your own company. There are two very different questions there. Hiring people to come in and actually test your own company, whether they're specialists or in-house, is really important. But yes, the actual hacking back is probably best left to the nation-state actors that have jurisdiction over those types of things.

Miller: I would agree. I'm actually placing personal bets on the hack-yourself method. I mentioned that VIPs are always top of mind for me; I actually have a side business where I do just that for high-profile individuals. I think there's a big market for people who want to know what their vulnerability is. There's this feeling of defeat: your information is already out there, so what do you do now? I think there's something about seeing a report of it, the same way you get your 23andMe DNA report, a report of your digital DNA. Current services, personally for me, are lacking. I'm a member of LifeLock, but it's underwhelming when it's like, "Yes, we found your data on the dark web again." What helps is really being able to see a report of, "You're using this password in different ways. Don't think that just because you added the name of the company at the end it's safe. I know you're all out there." Someone doing that, being able to [inaudible 00:37:21] someone and show them that, I've seen be very valuable in changing behaviors. I think there's probably opportunity in the company context for that as well, with consent.

What Engineers Can Do

Participant 2: I was curious about your views on what individual engineers can do; we touched on it. I'm a general-purpose software engineer, I'm not a security engineer, I just build regular software. I'm at this panel, so I care about this stuff. I don't think I've ever [inaudible 00:38:09] stories like, "as a user who's concerned about privacy"; I've never seen that in duo, never seen that in a task. I question exactly what the role of individual engineers is. If I can make an analogy, another topic engineers care about that no one else does is performance.

We care about making our code run fast, and if we want to sell that to our product managers, the way we do it is to tie it to the product: "We're seeing that it's going to have this effect on the product," and we convert it to dollars. Then they can say, "That sounds great," or they might say, "It's actually not worth it; it's not enough of an improvement." If you're someone who cares about privacy, as an engineer I feel like there's only so far you can go with that. We don't even have a framework to be able to say how thinking about privacy ties to a product; there's not even a framework to make that pitch, and even if we did, we'd still get the same thing: "Ok, we'll make [inaudible 00:39:10] evaluation," and now it's not worth it enough.

I agree that as engineers we need to be ethical and think about this, but I also wonder whether we're placing too much weight on what individuals can do, whereas I guarantee you that if companies were liable for it, it would actually mean something; that liability doesn't exist. You could have been the most ethical engineer at Experian and made the case, "You can't store passwords like this, they're going to get hacked," and they would just shut you down.

DeBruhl: I will say all software engineers are security engineers at some point. The lowest-hanging fruit is looking up the most common vulnerabilities that SANS, OWASP, and various authors publish: the 10 most common things. Familiarize yourself with those so you avoid the really stupid low-hanging fruit, because if you look at something like Experian, where they just forgot to update their Apache Struts framework, that should have been avoided; there's no reason that ever happened. Familiarizing yourself with at least that level of understanding around security and privacy is really the baseline.

Miller: I've had a couple of experiences with this. I run a privacy and security champions meeting monthly. Attendance is low; even when you open up privacy and security to the whole company to find out which engineers care, it's the same engineers, SREs, and [inaudible 00:40:51] who ping me about things, and I know that they're looking at the stack. I put out a newsletter every month and I say, "If you see bugs or you see something that's questionable, contact me." I'm working on that, but like I said, it's not well attended.

What I have seen be successful is building it into the product discussions. We had a team that wanted to start ingesting new data, and by law, by a legal agreement, it couldn't be accessible to certain other people in the company; it was only for that team, in that use case. Our architecture doesn't support that, so what we had to do was get this governance group together and figure out how to unblock it. Part of that was finding out what the ROI would be for setting it up. Do we want to take the time and build it out ourselves? No, we don't have a year or two to try to build it and make our systems right, so let's set up a clean room, let's use a Redshift solution or whatever else is possible, let's look at the market so we can unblock this and get value.

Then, once we have that value, we can make the case that we should put in place more access controls and environments that we can stand up very quickly and shut down if we need to. For me, it was the business that was driving that use case, not privacy, and I'm going to ride it. I'm going to ride it all the way to the end so that I can make sure the business is able to function, because in the end it is about compliance, but I want the company to be successful too.

Cristina: Just to add on a little bit, I would say we spend a lot of time educating engineers and product managers about privacy and data protection, but I don't expect them to fight the battles for us. We use Slack heavily, obviously, and so there are many channels where they can flag things for us, and then we take it from there, figure out the solutions, and help build the case in partnership with them. To your point, it's hard when you're just one engineer who has an idea about something that could be abused, or something that could be better, to go make the case to the entire company.

I would also say it depends on the company. At Slack, protecting privacy and data is paramount; if people don't trust us with their data, they're not going to use Slack, so making that business case is a lot easier. Privacy is very easy to sell across the board because it is so important to everything we do. It's going to be a lot harder at other companies, maybe consumer-focused companies, where it's important but just not as fundamental to everything they do.

Schlossnagle: I'll add a different spin on that, that all of the technical approaches are necessary, but the difference between a software engineer and a professional software engineer is the word 'professional' and professions come with ethics. If you look at any profession out there, there is a code of ethics. I would argue that the best code of ethics for the profession is probably the ACM's code of ethics. If you're not in the ACM, you probably should be.

If you are in the ACM whether you know it or not, you agreed to the code of ethics. If you work with other software engineers that are also ACM members you can reinforce that and also realize you're working with people from the same background. Everybody's read that same code of ethics whether they agree with all of it or not, they're working off of that same slate. I think that's really valuable so everyone has the same shared context on how to approach those problems.

Privacy Effectiveness

Participant 3: Between the actual practice of government regulations and actual privacy effectiveness, in my experience, I've seen a lot of orthogonality. For instance, I previously worked on software, trying to make it compliant with the EU's old cookie regulations. You find a lot of regulations that, when you have a lot of situational context and awareness of what you're doing, make you feel like no one's privacy is really being protected by this, but I'm really hindered from doing my job in an effective way. They either prevent me from doing my job or create large amounts of confusion and ambiguity: "Ok, I think I'm in compliance, but now everybody's scared about this."

On the other hand, you walk away from all that stuff and look at all the stuff we are allowed to get away with. I'm like, "They give us access to this and we're still in full compliance with everything? I can't believe we're actually allowed to access and view and do this while still being in full compliance." Having talked to a lot of people working in the GDPR space, that is the thinking I hear very loudly from the people actually writing that code. I'm afraid that as public awareness grows around the topic, that orthogonality might actually increase rather than decrease. One might say, "We'll just make the regulations more gooder and less badder." Is there anything tangible that you all might suggest as ways of reducing that orthogonality, or any other take you have on that?

Schlossnagle: Actively participate in policy discussions. The ACM has a policy committee that tries to influence policy in Washington. If you're interested in that and you want to put your time behind it, there are definitely places where you can invest it and get a return. I think it's a super serious issue, and I think all of these things happen because of explosions. The cookie one is a great example of a very small explosion that caused a rash reaction.

When the explosion is big enough, we will see something very seismic; buckle up, I think it's going to get a lot worse. As I tell everybody, if we held ourselves accountable, this wouldn't be a problem, but we don't, so someone will eventually do it for us. I will say that I think there is a lack of correlation there.

DeBruhl: I think we need to increase education for technologists on policy. We're currently looking at that more, exploring partnerships with our poli-sci peers to actually get both groups, along with the ethicists, in one room to have more meaningful discussions, starting young at the college level and hoping that plays out through their careers.

Schlossnagle: I also think the lack of computer science education among policymakers is an enormous problem, and the only way to fix that is to get computer science people to become policymakers. This is one of the rare places where we should be putting computer science into something, as opposed to putting that domain into computer science. I usually advocate that computer science shouldn't even exist: everyone should stay in their own domain; if they're biologists, they should learn to compute and then do biology, because they know biology better.

Instead, we have all of these gallivanting technologists who think they can solve all the world's problems and don't know jack shit about the domain they're going to try to improve. But in this particular case, when you're trying to regulate how software is written and how policy is applied, we really need people who understand computers to write those policies.

Miller: It's been very scary for me. When I started my career, I started in DC. I have a public policy background and thought I was going to do that, and then got into this world of privacy. I worked for the FTC, an agency of attorneys for the most part, and any chief technology officer they had was at a senior level. They may do some leading of investigations, but they're not actually doing the auditing.

The FTC, if they set up a consent decree for a company, lets the company choose the auditor. I've been through that process, and even that is fundamentally broken, because there's this weird relationship between you and somebody you're paying to audit you: they're going to find things, but they also have to give that report to the government. It's a very weird relationship, so I plus-one that we need more people with analytical skills, people with computer science backgrounds, and people with experience in tech back in the government.

Then just one other point I'd like to talk about, which is money. I think that matters; it's what made me move from DC to go to Facebook, then to Snap, then to Pinterest. They understand the value of my experience, and I don't know that I hear that same message coming from the government in their hiring practices. They're not offering free food; it's a hard tradeoff. They need to start being creative about how they attract talent. They're competing with engineers who have pretty great lives in San Francisco and across the Bay, so they need to get creative about that. I don't know how they're going to do it, but they should.

Schlossnagle: One of the seismic things that I predict (no one will be happy about this) is that the reason we are all paid so much for what we do is that we extract more value from consumers than we give them, most of the time. We're taking private data and bundling it as a product. We're taking a little bit of human rights, eroding them, and then selling them. I don't know how long that's going to last, but when those sorts of business models evaporate, there will be a right-sizing of salaries.

I don't know if that's going to happen in 5 years or 15 years, but when we're paid what teachers are paid (God, I hope the price on that goes up), I think it will be very easy to go into policy. Then you don't have this outsized incentive to go "Capitalism 100%," because of where all of the salaries wind up. Right now we're incentivized not to go into public service. It's a mess.

Top Technical Solutions

Moderator: I think we are running out of time and have to wrap up, but before I do, just a quick round of the table: name the top technical solution that you're most excited about in terms of helping with regulation. You don't have to explain why, just name it.

DeBruhl: I'm quite interested in modeling of physical systems and how physics can be used to verify security and privacy.

Cristina: The technology that Thorn's developing to help platforms combat CSAM.

Schlossnagle: Not Blockchain.

Miller: He mentioned it earlier but federated machine learning.

 


 

Recorded at:

Feb 19, 2020

