Consensual Software: How to Prioritize User Safety


Key Takeaways

  • Consensual software means we should get an explicit “yes” from our users in order to interact with them or their data.
  • Many well-intentioned features can be abused to harass users. Building software with consent in mind helps prevent these loopholes before they happen.
  • GitHub’s Community & Safety team builds consensual software and reviews other teams’ code to close abuse vectors before they get deployed.
  • Use a checklist of common abuse vectors to close potential loopholes during the product specification phase before the features are released to the public.

“Okay Google, what is the Whopper burger?”

This one phrase, spoken at the end of a Burger King television ad, triggered thousands of Google Home devices and Android phones across America to search for the Wikipedia entry for the Whopper burger. The stunt caused an immediate uproar because it did something no one expected: it used consumers’ devices without their consent.

In the age of smart devices, IoT, and ever more invasive advertising practices, it is imperative that we build explicit consent into every feature we ship. What happens if the next television ad asks Alexa to purchase twenty rolls of a certain brand of toilet paper every time the ad plays? What if Google Home plays ads for medication for a user who hasn’t told the rest of the family about their condition? What if Alexa outs an LGBTQ user to their family and puts them in danger? Or endangers a person trying to leave an abusive spouse by suggesting ads for self-defense classes and survival supplies based on their browsing history?

Now that ISPs are able to sell consumers’ browsing history, user privacy is more important than ever.

The easiest way to protect user privacy is to give users the information they need to make informed, consensual decisions about using our products, rather than assuming passive, implicit consent.

This article covers how consensual software helps address online harassment and close abuse vectors before they become PR nightmares. It also describes features the GitHub Community & Safety team has built, how we review features from other teams, and the cost of ignoring online harassment on your platform.

What is consent?

What does consent mean? Well, consent is as simple as tea, and it applies to many facets of life, including technology. Consensual software means we should get an explicit “yes” from our users in order to interact with them or their data.

By getting an explicit “yes” from users, we ensure that the features we build aren’t used to annoy, harass, or endanger people. If we assume that a user has implicitly consented to using a feature, then we create vulnerabilities and loopholes that can be exploited to harass others. The Amazon Echo Look is a perfect example of how a well-meaning idea could go horribly wrong.

Let’s say Alice purchases an Echo Look because she wants to take outfit of the day photos for her blog. She places the Echo Look in her bedroom and tells the device to take a photo of her outfit.

The problem, as with most webcams, is that the person telling the device to take a photo is not always the person who is in the photo. So while Alice may be explicitly consenting to have her photo taken when she’s fully dressed, it would be very easy for another person to trigger the camera to take a photo of her without her consent by simply saying “Alexa, take a photo”.

Because Amazon Echos, Google Homes, and a host of other IoT devices have a hot mic that’s always listening, adding a hot camera to the mix creates infinitely more problems: the device can easily be hijacked to target vulnerable people for humiliation, blackmail, or revenge.

To add another headache to the mess, photos taken by an Echo Look will not be covered by copyright law. The US Copyright Office states that “the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author”. That means no one owns the photos taken by an Echo Look, which makes it impossible to use a DMCA takedown, one of the few effective ways to get non-consensual photos removed from a revenge porn site or forum. While 36 US states now have revenge porn laws, prosecution is often traumatizing, time-consuming, and costly, and it doesn’t solve the actual problem.

The actual problem here is that we as product designers, engineers, and technologists need to start building products with the user’s explicit consent in mind during the product ideation phase. We need to make sure that Alice is always able to consent to having her photo taken by an Echo Look, each and every time.
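
To make that concrete, here is a minimal sketch in Python of what consent-gating a voice-triggered camera might look like. The names and API are entirely hypothetical; the point is that a capture requires both an enrolled requester and a fresh, explicit confirmation every single time:

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class ConsentGatedCamera:
        """A hypothetical voice-triggered camera requiring per-shot consent."""
        enrolled_users: set[str] = field(default_factory=set)

        def request_photo(self, requester: str,
                          subject_confirms: Callable[[], bool]) -> bool:
            # Ignore commands from anyone the device has not enrolled:
            # a stranger saying "take a photo" should do nothing.
            if requester not in self.enrolled_users:
                print(f"Denied: {requester!r} is not an enrolled user.")
                return False
            # Require a fresh, explicit "yes" for every single capture.
            # There is no standing permission and no implicit consent.
            if not subject_confirms():
                print("Denied: no explicit confirmation for this capture.")
                return False
            print("Photo taken.")
            return True

    camera = ConsentGatedCamera(enrolled_users={"alice"})
    camera.request_photo("alice", subject_confirms=lambda: True)    # Alice consents
    camera.request_photo("mallory", subject_confirms=lambda: True)  # stranger: denied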

Consensual software at GitHub

As an engineer on GitHub’s Community & Safety team, it’s my job to close abuse vectors and build anti-harassment tools to improve collaboration on open source projects. One feature my team built is Repository Invitations. Previously, a user could add anyone as a collaborator on a repository without their knowledge. We received multiple complaints about one particularly disruptive user who created a repository with a racist, derogatory name and added several prominent users as collaborators; the repository then showed up on their timelines as if they endorsed it, because the feature assumed implicit consent. Now, users must explicitly accept an invitation before they are added to a repository.
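
The shape of that change can be sketched in a few lines. The following simplified Python model is hypothetical (it is not GitHub’s actual implementation); it contrasts the old implicit-consent behavior with an invitation that stays pending until the invitee explicitly responds:

    from enum import Enum

    class InviteState(Enum):
        PENDING = "pending"
        ACCEPTED = "accepted"
        DECLINED = "declined"

    class Repository:
        def __init__(self, name: str):
            self.name = name
            self.collaborators: set[str] = set()
            self.invitations: dict[str, InviteState] = {}

        def invite(self, user: str) -> None:
            # Old behavior: self.collaborators.add(user), which publicly
            # associated the invitee with the repository with no say at all.
            # New behavior: record a pending invitation instead.
            self.invitations[user] = InviteState.PENDING

        def respond(self, user: str, accept: bool) -> None:
            # Nothing touches the invitee's account until they answer.
            if self.invitations.get(user) is not InviteState.PENDING:
                raise ValueError(f"No pending invitation for {user}")
            if accept:
                self.invitations[user] = InviteState.ACCEPTED
                self.collaborators.add(user)
            else:
                self.invitations[user] = InviteState.DECLINED

    repo = Repository("derogatory-name")
    repo.invite("prominent-user")
    assert "prominent-user" not in repo.collaborators  # nothing on their profile yet
    repo.respond("prominent-user", accept=False)       # they can simply decline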

This feature was well received by many of our users, who tweeted their support.

Another feature the Community & Safety team implemented was rejecting commit pushes that contain private email addresses. Over the years, many users have written in to our support team because spammers scraped GitHub for email addresses that had been accidentally published in commits pushed from the command line; the spammers would then send unsolicited emails trying to recruit GitHub users or sell them something. Following the consensual software model, we decided to give users the information they needed to make an informed decision. If our users didn’t know they were publishing their private email address via the command line, then we should warn them of the risks and allow them to use a noreply email address if they wish.
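
A simplified sketch of that kind of server-side check might look like the following. This is hypothetical Python, not GitHub’s actual implementation, and the noreply address shown is only an example: when a user has opted to keep their email private, pushes containing commits authored with that address are rejected with a message that explains the risk and offers a fix.

    from dataclasses import dataclass

    @dataclass
    class Commit:
        sha: str
        author_email: str

    def check_push(commits: list[Commit], private_emails: set[str],
                   keep_email_private: bool) -> tuple[bool, str]:
        """Pre-receive-style check that refuses to publish a private email."""
        if not keep_email_private:
            return True, "ok"  # the user has made an informed choice to publish
        for commit in commits:
            if commit.author_email in private_emails:
                return False, (
                    f"Push rejected: commit {commit.sha[:7]} would publish your "
                    f"private email {commit.author_email}. Configure a noreply "
                    "address (e.g. username@users.noreply.github.com) or change "
                    "your email privacy settings."
                )
        return True, "ok"

    ok, message = check_push(
        [Commit("a1b2c3d4e5", "alice@example.com")],
        private_emails={"alice@example.com"},
        keep_email_private=True,
    )
    print(ok, message)  # False, with an explanation instead of a silent leak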

How to build consensual software

When my team starts to design a feature, we always ask: how could this be used to harm someone? How will someone use this in an unexpected way to annoy, harass, or put another person in danger? All too often, user safety features are bolted on as an afterthought, after people have already been exposed to disruptive behavior. If these questions are asked in the beginning stages of product specification and design, then hundreds of hours of engineering time can be saved by making sure that the product is built safely the first time, instead of fixing it later.

The Community & Safety team is a fairly new team at GitHub, and we spent our entire first year working on technical debt accrued by other teams. That is a significant amount of time and money spent on problems that could have been addressed early.

When user safety features aren’t a priority, users quickly lose trust in your product and leave. Twitter is a prime example. For years, Twitter ignored its users’ pleas for better blocking functionality and safety mechanisms, citing “free speech” as the reason. Now Twitter has a reputation for fostering white supremacists and hate mobs. This has caused multiple companies, such as Salesforce and Disney, to back out of acquisition deals, and Twitter’s stock price has tanked.

Consensual software is an easy framework to build into your product ideation phase to improve your product’s safety before it becomes a PR nightmare. At GitHub, the Community & Safety team frequently reviews features for other teams, much like an application security review. For those who don’t have a dedicated Trust & Safety team, putting together a checklist of common abuse vectors and loopholes can save a lot of engineering hours and money, and prevent a lot of bad press down the road. Questions you can ask include:

  • Is every user explicitly consenting to use this feature, or are we assuming they want to participate?
  • Is it easy to opt-out of this feature?
  • Is it easy to block a person who is abusing the feature to spam, harass, or threaten others?
  • Are there audit logs to see how users are interacting with your feature? Metrics?
  • Is it easy for your support staff to untangle what happened if an incident occurs?
  • How much personally identifying information is public? How easy is it to redact past or sensitive information (e.g. a trans person’s deadname, a user’s physical address, private email addresses, AWS keys)? Do we really need to store or expose personally identifying information at all?
  • Are you allowing users to upload images? Are you filtering out porn? Are all users explicitly consenting to receiving uploaded images? Can you solve this problem by using a pre-vetted image integration like GIPHY?
  • Do you allow brand-new (0-day) accounts the same privileges as vetted users?
  • How could a stalker ex use this feature to hurt someone?
  • How are your support tickets handled for each new release?

By asking these questions before features and products are built, we can close a majority of abuse vectors before they become a problem.
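
Even a lightweight encoding of such a checklist can keep it in front of reviewers on every release. Here is a minimal sketch in Python; the questions and names are only illustrative, and a real review would track evidence rather than booleans:

    ABUSE_REVIEW_CHECKLIST = [
        "Does every user explicitly opt in, or are we assuming participation?",
        "Is it easy to opt out of the feature?",
        "Is it easy to block a person abusing the feature?",
        "Do audit logs and metrics exist for the feature?",
        "Can support staff reconstruct what happened after an incident?",
        "Is personally identifying information minimized and redactable?",
        "Are uploaded images filtered and explicitly consented to?",
        "Are brand-new (0-day) accounts restricted relative to vetted users?",
        "How could a stalker ex use this feature to hurt someone?",
    ]

    def review_feature(feature: str, answers: dict[str, bool]) -> list[str]:
        """Return the checklist items still unresolved before shipping."""
        unresolved = [q for q in ABUSE_REVIEW_CHECKLIST if not answers.get(q)]
        status = "ready to ship" if not unresolved else f"{len(unresolved)} open item(s)"
        print(f"{feature}: {status}")
        return unresolved

    # Example: a feature review with one question still unanswered.
    answers = {q: True for q in ABUSE_REVIEW_CHECKLIST}
    answers["How could a stalker ex use this feature to hurt someone?"] = False
    review_feature("repository-invitations", answers)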

Never assume that a person wants to interact with your feature. People are creative and will find a way to abuse it. Always get a user’s consent to interact with them or their personal data.

What is the cost of online harassment?

It is no longer cost efficient to ignore online harassment and abuse on a website. Twitter is a prime example of this, having been rejected by Google, Disney, and Salesforce as an acquisition target due to its online abuse problem, which caused its stock to plunge 19%. Twitter’s daily active user numbers are also dropping steadily, and user growth has suffered.

Users are starting to notice when companies use their personal data without their consent, and several prominent lawsuits over non-consensual use of user data have been brought against tech companies.

Users are also more likely to uninstall apps they believe infringe on their personal data. According to the Pew Research Center, 6 out of 10 users will decide not to install an app they think infringes on their privacy, and 43% will uninstall an app for the same reason.

User trust is paramount to maintaining user growth. In a world where new apps and services pop up every day, keeping a steady user base is key to longevity. Making sure that your users consent to your product’s features, and to how those features interact with their data, is an excellent way to stay transparent and genuine.

Conclusion

As software becomes a more integral part of our daily lives and IoT becomes the norm in every household, we need to design software that protects users’ safety before it becomes a problem. Insecure and non-consensual software costs money, decreases user trust and engagement, and adds years of technical debt. It’s no longer cost effective to build software without user consent in mind, and users are voting with their feet and their money. We need to treat safety features like application security and build them into the product specification stage, before features go into production.

By asking for permission first, we can avoid the PR nightmare (and the cost) of asking for forgiveness later.

About the Author

Danielle Leong is an engineer on GitHub's Community & Safety team who loves building tools to help make open source a more welcoming and inclusive environment. She is also the founder of Feerless, an app that provides trigger warnings for Netflix users with PTSD. She's passionate about consensual software, inclusivity in tech, mental health awareness, and promoting good online citizenship. In her spare time she climbs rocks, rides motorcycles, and dresses up as a T-rex, occasionally all at the same time.
