
Design Strategies for Building Safer Platforms



Kat Fukui talks about the design strategies that the Community & Safety team at GitHub uses to design safer, more consensual features and how to incorporate them into teams’ processes.


Kat Fukui is a Product Designer at GitHub.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.


I am the product designer on the Community and Safety Team. I work really closely with everyone; I'll get into the team's intro later. But today we're going to talk about Design Strategies for Building Safer Platforms, dive into what that means, and how you can apply these strategies whether or not you have a designer on your team.

But let's do a quick intro about me. My aesthetic can be described as casual streetwear Jigglypuff. In my spare time, I make animated pixel art, because I'm still a designer. On Twitter I'm @katfukui; on GitHub my branding is a little bit off, I'm katmeister, because I thought it would be easier for coworkers than pronouncing my last name, but whatever. As I mentioned, I'm the product designer on the Community and Safety Team. A little more about me: I am super into comics, mostly indie comics, so if you have [inaudible 00:00:58] feel free to talk to me after about that. I'm really into full-process design. I know I'm a designer at a developer-focused conference, but I have a strong background in coding and using code to make my designs even stronger.

I've just been a fan of Internet communities my entire life. The earliest ones I can think of - I grew up on Neopets, Xanga, LiveJournal, DeviantArt. Those were spaces where I felt like I could be myself without sacrificing my personal safety, even at a young age. And while I don't participate as much in communities anymore - I have a few Discord channels, certainly gaming, and I'm part of some Twitter communities - I do find that those really pure moments on Neopets are what I really miss.

But enter my next stage, where I'm currently at GitHub. Small plug: you can actually make your own octocat, so that's pretty cool, right? I've been at GitHub for two-and-a-half years, and my entire time has actually been dedicated to community and safety. Before that I had worked on other platforms for community building, particularly one called Stories of Solidarity, where we were elevating the voices of precarious laborers, particularly in California, who work on farms, at Walmart, or in the hotel industry, that kind of thing.

Empowering people and elevating voices has always been something really dear to me. So enter GitHub, which, as my nerdy self knows, is the biggest platform for developers to connect and collaborate on code together. It wasn't really the original intention for GitHub to be a social network, but here we are. And that's really because a lot of conversation and human interaction happens around building code. You're not building code in a vacuum; you're building for humans, by humans. So as GitHub was evolving really rapidly, conversations naturally arose about how to fix the features we had already built so they could scale up to accommodate those social interactions.

Enter the Community and Safety Team. It's sort of our running joke to keep using this airboat photo. At the time, only a few of us were on the team, and Danielle keeps photoshopping in more members as we go on. We're a really amazing, diverse group of folks; our lived experiences make our work on community and safety even better. And we have a really long range of responsibilities on our team: making sure our communities are healthy, doing feature reviews for almost every big feature that goes out to make sure we're not introducing new abuse vectors, and fixing technical debt. As I mentioned, when GitHub was founded 10 years ago, we weren't necessarily thinking about the ways that code collaboration software could be used to harm other people.

As I mentioned, this is our team and this is our mission: GitHub's Community and Safety Team builds systems that empower people to grow inclusive and healthy communities around their projects, while discouraging behavior that is destructive or threatens personal safety. I usually don't like reading slides word for word, but I think this is a really concise mission, and it's actually been our mission since day one of the team. It has remained true.

But regardless of whether you have a community and safety or a trust and safety team at your company, I think building user safety into the foundation of technology is everyone's responsibility. Whether you're a manager, an individual contributor, a designer, a researcher, or an engineer, it is everyone's responsibility. And I say that because really any platform or feature can and absolutely will be abused. Even if you're not primarily building a social network, you'd be surprised how things can be abused. We actually talked about this in the panel earlier. Leigh Honeywell brought up a really good example where people were sending money through PayPal with hurtful amounts like $14.88, a pattern used by the Nazi community to harass Jewish people. So whether or not you considered that when you were building the feature, it will happen.

Every time we build something at GitHub, we ask the question: how could this feature be used to harm someone? I think if you can start at this point in everything you do to build technology, even if your company hasn't bought into trust and safety yet, this is one way to get started. I also want to mention that if users don't feel safe on a platform, they absolutely will leave, so it's important to take really swift action when dealing with harassment and abuse on your platform. Not only do you need to have the tools, you need to have the resources to act quickly on them.

I mentioned before that you don't need to be a designer to incorporate design strategies or design thinking into your processes. There are ways to get your team collaborating on safer features that ask for consent and that make your communities, and particularly your users, safer. So today we're going to talk a bit about understanding your users with user stories. This is sort of a buzzwordy user-persona type thing, but I like to use it in the context of finding stress cases for your users; we'll talk a bit about that later. Then actually implementing safety principles: defining them relative to your business and implementing them via the design. Then bridging the design with engineering using acceptance criteria, and scaling all of this with documentation: figuring out how to document the work you're doing and how to share it across functions.

I am a designer, so I drew a little blob to help demonstrate how to implement these strategies. And like I said, aesthetic Jigglypuff, so I'm going to keep going with that. I named this thing Blobbo, and I'll keep referencing them. So Blobbo works on a B2B - blob to blob - platform; I had fun with this. And they're working with their team to build a direct messaging feature. It's pretty common on social networks, or really any professional network as well. But there are definitely things we need to consider here, and we're going to apply our first strategy: understanding with user stories.

Understanding with User Stories

Blobbo, as most designers do, kicks off the feature by doing preliminary research. After talking to a few users, it's pretty clear what they want: users want private spaces to chat. Users want to stay on the platform without switching to other chat apps, since context switching is harmful to productivity. Users want to be more social with friends and strangers. It seems like a lot of the motivations for wanting direct messaging are collaborative, social, workflow-related, etc. But Blobbo notices that none of these motivations really account for the potentially stressful situations that can come from direct messages, and the inevitable abuse vectors. They've seen how DMs have been used to harass others on other platforms, and how that affects people who already tend to experience that kind of harassment in real life. So they really want to make sure this feature gets built correctly.

Enter user stories, where we can actually create stress cases to better understand how your users are feeling in particularly scary situations, like trying to escape abuse. Lately, I've been using the term "stress case" instead of "abuse case", which comes from "Technically Wrong." I highly recommend checking out this book; it's very relevant to what I'm talking about here. I personally like the term because I think it humanizes these sorts of situations a lot better than saying "edge case." Because even if it happens rarely, a stressful edge case has a much larger negative impact, and you can lose trust a lot faster, especially if it happens publicly.

I wanted to pull out some stats from an open source survey that one of our data scientists at GitHub, Frannie, conducted last year. 18% of respondents in open source have personally experienced negative interactions. And while only 18% have experienced it themselves, far more - 50% - have witnessed it happening between other people. And of the people who have experienced or witnessed it, 21% actually stopped contributing to those projects. So it's really clear that there's a direct correlation: even just seeing abuse on a platform that's not being taken care of drives people away. I mentioned before that user stories are similar to user personas, but they're focused more on motivation and the context happening outside of the screen, such as mental health.

So how exactly do we get started on making this into a story that's relevant to our platforms? These are the three things that I address: What problems are they experiencing? How are they feeling? And what does success look like? Let's draw one for Blobbo. I tend to use an iPad for this - there are definitely other ways, but for me this works well for actually drawing out a journey. This poor blob is trying to escape harassing DMs from an abusive relationship. So what are their problems? Well, it's actually really easy for the abuser to just keep creating sock puppet accounts. Sock puppet accounts are accounts made usually for the sole purpose of either spam or abuse. So it's really easy for someone to keep opening new accounts and sending harassing DMs to this poor blob.

How are they feeling? Well, they're feeling powerless in this situation. They're actually fearing for their own personal safety, and maybe even for others around them. We've seen online violence translate into the physical world quite often, and this could very well be one of those situations, especially if it's coming from someone in an abusive relationship. So what exactly does success look like for this blob? Well, to address the problem of abuse at scale, support needs to have the right tools to swiftly take care of it - I have a nice little ban hammer going on here. And we need tools that minimize the impact for our users, such as blocking, or maybe even letting users turn off DMs, that kind of thing. Then they feel a lot more safe and secure, and they won't leave your platform.

So you're probably wondering, "But I don't draw." Well, it's totally fine if you don't draw. Here's a mini summit for the Community and Safety Team where I forced a bunch of engineers to use markers and actually draw out the user journeys. We did it a little differently: we centered our user in the middle and went outwards in circles to talk about their problems, how they feel, and what success looks like. It was actually a really great time for us to get a better understanding and align on what the biggest problems are for our teams.

As I mentioned, user stories are really great for aligning on your feature's vision, but also for sharing specialized knowledge with other teams that might not have had this context before. And they also help validate decisions really quickly. If you're working on other features that may touch on abusive relationships, you can always refer back to that user story and validate: are we moving in the right direction? Sorry, I'm flexing my design skills right here, but this is another example of what you could do on an iPad to really go through an entire journey. I made this one for open source maintainers.

And what I actually really love about doing these is that they highlight the gaps in your product - where you are lacking. As you see at the bottom, an open source project can become unsuccessful, and a lot of that is due to burnout. It really told us, "Hey, maybe for this OKR we should focus on solving maintainer burnout." So it's sort of like an oracle where you understand the parts where your product is lacking, where your team is lacking, and how you can improve.

Build with Safety Principles

So let's say we've got all of our amazing user stories for several different abuse cases settled. We can start building with safety principles. I really urge you all to define what safety means specifically to your users. I don't think it's necessarily the same for every platform. Yes, there are some foundational principles that we can all adopt, but it's going to differ depending on your user base. I have a few examples of that. Maybe user safety for you means protecting people's private financial information. Maybe it's making your users feel safe when they're in a stranger's car or home. Or maybe it means having ownership of your personal content, like video streaming or uploading artwork, that kind of thing. So really, it can differ from team to team.

They can totally be aspirational too. I don't think you should be confined to what you're already doing. If you can look to others and think, "Hey, I want to be just like this. I want to make sure that I'm securing our users' data", then totally go for it and start building your features to work towards that goal. In the context of GitHub, we've defined user safety as ensuring everyone can participate in communities and collaborate on code together without risking their personal safety, well-being, or privacy, regardless of their background and identity. And this has been sort of a guiding foundation for how we each make decisions and choose what to build.

In particular, relative to GitHub's work, we focus a lot on making sure we're helping both community maintainers and potential contributors collaborate through a text-based medium, which, it turns out, is very difficult. And making sure that we're delineating users' personal identity and corporate identity when they commit code. Are we exposing private emails that they wouldn't want their employer to know about? That kind of thing.

I'll include some of the starter principles I was talking about, which I think can help you start incorporating these ideas into your own work. The first one is asking for consent; then encouraging inclusive behavior while discouraging destructive behavior; minimizing the impact of destructive content; and leaving a paper trail. We'll go through all of these, and I have some more Blobbo examples, of course.

The first one is asking for consent. I wanted to start off with a quote from Consensual Software - and if you have more questions, Danielle is here; she is the founder of Consensual Software. Consensual software asks for users' explicit permission to interact with them or their data; it respects users' privacy, and does not trick or coerce users into giving away permission or data. And what I also really enjoy is that consensual software asks for permission first rather than begging for forgiveness later - which is very contrary to a mantra that I think is quite common in tech, and perhaps in life.

Having that rule explicitly called out is pretty great. There's a link there so you can totally check it out later. I think it's important in the context of technology because it helps your users feel in control of their experience on the platform. Especially for actions that may leak private information, like location, or that exploit notifications, you should always ask users to consent by opting in to features or workflows.

Blobbo is working on a first iteration of DMing, and keeping this consent model in mind, Blobbo adds a modal that tells you who is trying to interact with you, with a link to check out their profile and make sure they're not someone malicious - it could be a sock puppet account - which offers some trust ahead of time. You can accept the message or delete it. But even better, Blobbo adds a way to update your settings so that you don't have to receive DMs from people you don't know or who are not in your trusted networks.
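To make that consent flow concrete, here's a minimal sketch of how a DM could be routed through the recipient's opt-in settings. The talk doesn't include code, so all the names here are hypothetical: unknown senders land in a pending message-request queue rather than the inbox, and a "nobody" setting turns DMs off entirely.

```python
from dataclasses import dataclass, field
from enum import Enum


class DMPolicy(Enum):
    EVERYONE = "everyone"          # accept message requests from anyone
    TRUSTED_ONLY = "trusted_only"  # only people in my trusted network
    NOBODY = "nobody"              # DMs are switched off entirely


@dataclass
class User:
    name: str
    dm_policy: DMPolicy = DMPolicy.EVERYONE
    trusted: set = field(default_factory=set)    # usernames I already trust
    pending: list = field(default_factory=list)  # requests awaiting my consent
    inbox: list = field(default_factory=list)


def send_dm(sender: User, recipient: User, text: str) -> str:
    """Route a DM according to the recipient's consent settings."""
    if recipient.dm_policy is DMPolicy.NOBODY:
        return "rejected"  # recipient has opted out of DMs entirely
    if sender.name in recipient.trusted:
        recipient.inbox.append((sender.name, text))
        return "delivered"
    if recipient.dm_policy is DMPolicy.TRUSTED_ONLY:
        return "rejected"  # strangers can't even leave a request
    # Unknown sender: hold the message until the recipient accepts or deletes it.
    recipient.pending.append((sender.name, text))
    return "pending"
```

The key design point is that nothing from an unknown sender ever reaches the inbox without an explicit accept, which is the "ask first" model rather than "beg forgiveness later".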

I'll throw in some GitHub plugs - things that we've done. I think one of my favorite features we worked on was repository invitations. Prior to me joining GitHub, you could actually add anyone as a collaborator to a repository, meaning that if I made a repo called "Blobbo sucks" and invited Blobbo to it, they'd automatically be added. They would get notifications for issues in that repository. And it was possible that you could co-author commits in there and actually alter Blobbo's contribution graph - people could actually spell out things in it - so it was a real abuse vector. We closed it out by having you accept the invitation explicitly. And we actually added something recently to show you what data the repository owner can see, so it's very upfront and you're consenting to sharing that data. We also added a way to block the user from that flow.

Consent-driven design can also help users make more informed decisions when they're navigating the platform, interacting with other users or content, or sharing personal data. Some examples here - I think it's a GIF, oh it is, yes. You can actually opt in to seeing certain comments; we allow maintainers to hide comments with a subset of reasons that we've chosen. So you can hide comments as spam, abuse, off topic, resolved, or outdated, and we give a little description of what you can expect by opting in to view the comment.

We also added a banner that shows you, before you decide to use or contribute to a repository, which users you've blocked that have contributed to it in the past. And this is opt-in: you can turn those warnings off in your settings. But I always think back to the user story of someone who's trying to escape abuse. If a person they've blocked is someone who was abusive to them in the past, they might not want to contribute. This is just an extra way we're ensuring they feel safe contributing no matter where they are.

Another thing we added pretty recently is allowing organizations to claim the domains they list on their public profile - their email, their websites - and we give them a little badge to provide that trust upfront, just in case someone is trying to spoof, say, Atom with a zero instead of an o; that does happen. We want to make sure that users are consenting to the software they actually want to use and contribute to.
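The talk doesn't describe the verification mechanism, but a common pattern for claiming a domain is a DNS TXT challenge: the platform issues a random token, the organization publishes it in the domain's TXT records, and the badge is granted only once the published records match. Here's a hedged sketch under that assumption, with the DNS lookup abstracted away and all names hypothetical:

```python
import secrets


def issue_challenge(challenges: dict, domain: str) -> str:
    """Hand the org a one-time token to publish as a DNS TXT record."""
    token = secrets.token_hex(16)
    challenges[domain] = token  # remember what we expect to see later
    return token


def verify_domain(challenges: dict, domain: str, txt_records: list) -> bool:
    """Grant the verified badge only if the domain's TXT records contain our token.

    `txt_records` stands in for the result of a real DNS lookup of the domain.
    """
    expected = challenges.get(domain)
    return expected is not None and any(expected in record for record in txt_records)
```

Because only someone who controls the domain's DNS can publish the token, a look-alike domain (atom spoofed with a zero, say) can never earn the real organization's badge.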

The next principle after that is encouraging inclusive behavior while discouraging destructive behavior. Because every feature can be abused, we need to ensure there is appropriate friction to discourage that sort of behavior. I think it's really important that we still encourage, because we want people to feel like they can be part of the communities you're building without risking their personal safety - while making sure that we're equipping people to be good online citizens and encouraging the kinds of community behaviors we want to see, to make things more welcoming.

We'll talk a little about how Blobbo is trying to work on this. In particular, everyone loves integrations, especially GIFs and stickers in their private messaging. One way Blobbo encourages more positive interactions with GIFs and similar content is by curating happier categories, such as cats, rather than dropping users into a free-form text box to search. This helps you encourage the type of discourse you want on your site.

In a similar way, on GitHub we actually have a curated subset of emoji for reacting to an issue or a comment, that sort of thing. Originally, we wanted to reduce the noise of plus-one comments, so we obviously tried to encourage the thumbs-up emoji instead. But as you can see, we don't allow people to add arbitrary emoji as reactions to these comments; otherwise they could very well all be eggplants or something. So this is a great way for us to restrict what kinds of behaviors we want to see in a collaborative code workspace. This works for us because the context is productivity; it may not be the same if you're working on more of a social network.

Another feature we've added to encourage more positive interactions is a first-time contributor badge, shown when someone contributes to a community for the first time. This has been really successful because it signals to maintainers what kind of tone to take when talking or doing code review. Perhaps this person isn't up to date on the etiquette of the community, and as a maintainer reviewing, you can take a friendlier, more welcoming tone, maybe point to documentation, that sort of thing. Or you can use bots to detect that this is a first-time contributor and have the bot comment with a bunch of different contributing guidelines, resources, Stack Overflow links, that kind of thing.

We made some recent improvements to the block user flow in an organization. Now it's much easier for you to temporarily block someone, and you can also send them an optional canned message. It's not a free-form text field; our Community and Safety Team curated that message to link to the code of conduct, if there is one, and to the offending content they're being blocked for. The reason I really enjoy this feature is that we're encouraging rehabilitation on the platform. Studies have shown that if users know what content they were banned or blocked for, and why it's important to the community to discourage those kinds of things, they will actually change their behavior - not just on GitHub, but across other platforms as well. The social capital on GitHub is so prominent that people will actually change their behavior on Twitter if it means they can still have access to GitHub's resources.

The next principle I'd like to share is that you should minimize the impact of destructive behavior or content. I mentioned earlier that any feature can be abused, especially at scale, but the way you combat that, no matter what, is making sure you have a team who can react swiftly, and giving users the tools to manage and mitigate that abuse. We can design a tier of tools ranging from the least destructive to the most nuclear options to deal with any sort of abuse a user might experience. For example, in Blobbo's direct messaging app, you can see there are tools at both the conversation level and the user and content levels, ranging from least to most destructive. Being able to report and block are ways users can protect themselves if something were to happen.

You can see that some of the links are in red. We actually worked pretty closely on making sure that was intentional: the most dangerous action is extremely salient, but not so much that it gets in the way of the workflows that are used more often. Similarly, on GitHub we divided the types of actions within the comment's "more options" kebab menu, with the moderation actions at the bottom.

We also have temporary interaction limits, which are a more proactive way of preventing dogpiling. Dogpiling is when a group of people bombards you or your content. Usually the sources are places like 4chan or Reddit; they'll post a project and you'll see a lot of abuse happen on GitHub via that. So we added limits on which types of users can interact during that time span, and we capped it at 24 hours to make sure communities weren't just restricting everyone all the time.
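As a rough sketch of how such a limit might work (a hypothetical model, not GitHub's actual implementation): the limit records when it started, expires after 24 hours so it can't become a permanent lockdown, and while active only lets prior contributors or collaborators through.

```python
from datetime import datetime, timedelta

LIMIT_DURATION = timedelta(hours=24)  # cap so communities aren't locked down forever


class InteractionLimit:
    """Temporarily restrict who may comment on a repository."""

    def __init__(self, allow: str, started_at: datetime):
        self.allow = allow            # "existing_contributors" or "collaborators_only"
        self.started_at = started_at

    def active(self, now: datetime) -> bool:
        return now - self.started_at < LIMIT_DURATION

    def may_comment(self, user_role: str, now: datetime) -> bool:
        if not self.active(now):
            return True  # limit expired, back to normal
        if self.allow == "collaborators_only":
            return user_role == "collaborator"
        # "existing_contributors": prior contributors and collaborators may post
        return user_role in ("collaborator", "contributor")
```

The automatic expiry is the interesting design choice: maintainers get breathing room during a dogpile without being handed a tool that quietly walls off the community for good.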

Lastly, you should always leave a paper trail, so that not only does your support team know what's happening and can investigate abuse reports, but you can also surface it in the UI to foster transparency among people who are collaborating together. Here in the Blobbo example, we can show when something was edited. Just in case someone said something like "Blobbo sucks" again, then edited it before you were able to see it, we also have a timeline entry showing that something was deleted, and when.

Similarly, on GitHub we added comment edit history not too long ago, in case this exact thing happens - cat references and all. This leaves a really good paper trail, not just for abuse vectors, but also for your workflow, in case you need to look back on old conversations about decisions that were made.
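The core of an edit-history paper trail is simply that edits append to a log instead of destroying the old text. A minimal sketch, with hypothetical names (the real feature would also record who edited and let moderators redact sensitive revisions):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Comment:
    """A comment that keeps an append-only edit history as a paper trail."""
    body: str
    history: list = field(default_factory=list)  # (timestamp, previous body)

    def edit(self, new_body: str) -> None:
        # Record the old revision before overwriting, so moderators and
        # collaborators can always see what a comment used to say.
        self.history.append((datetime.now(timezone.utc), self.body))
        self.body = new_body
```

Because nothing is ever overwritten in place, an abusive message can't be hidden by a quick edit, and the same log doubles as a record of how a technical decision evolved.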

Bridge with Acceptance Criteria

Lastly, once we've tied all of those principles into the work we're designing, we can bridge all of that with acceptance criteria. This was an initiative that one of our quality engineers on the team, Michael Jackson, has been spearheading. I personally like it because it ensures that what we design will actually be built by our engineers and that we're not missing any crucial conversations. We can take those user stories that we drew on paper or an iPad or whatever, and actually write conditions for a feature's functionality, so the entire team can agree on and understand what will be built.

Here's an example that Michael wrote on blocking users. I enjoy this because, first of all, I can read it really clearly; it's not like a Rails test or something. I can actually give feedback too, and make sure that what I've built in my design workflows matches up with what will be built in engineering. So this really closes the feedback loop and makes sure we're all on the same page.
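Michael's slide isn't reproduced in the transcript, but to give a flavor of the idea, here's a hypothetical acceptance criterion for blocking, written as a plain, readable test. The Given/When/Then comments are the part designers and engineers can agree on before any code exists; the tiny `Repo` model is purely illustrative.

```python
class Repo:
    """A toy repository model, just enough to express the criterion."""

    def __init__(self, owner: str, blocked=None):
        self.owner = owner
        self.blocked = set(blocked or [])
        self.comments = []

    def comment(self, user: str, text: str) -> bool:
        if user in self.blocked:
            return False  # blocked users may not interact
        self.comments.append((user, text))
        return True


def test_blocked_user_cannot_comment():
    # Given a repository whose owner has blocked "harasser"
    repo = Repo(owner="blobbo", blocked=["harasser"])
    # When the blocked user tries to comment
    ok = repo.comment("harasser", "mean things")
    # Then the comment is rejected and nothing is recorded
    assert not ok and repo.comments == []
    # And an unblocked user can still participate normally
    assert repo.comment("friend", "nice work!")
```

Written this way, a designer can review the criterion line by line without reading framework code, and engineering can turn the same statements into real tests.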

Scale with Documentation

After all that - especially if you're the only team that may be thinking about safety in your company - I think it's really important to scale these processes with documentation. So I started writing up design guidelines codifying our principles, linking to research, that sort of thing, as well as style guides in our team repository. That way it's really easy to link to these things when I'm doing a feature review for someone else, and that frees up our time - so that instead of just consulting all the time, we can work on more proactive features that encourage better behaviors.

For example, this is the principles guideline, and this is a more tactical style guide. I showed a screenshot earlier of the kebab menu with all the actions, but it's super easy for that to turn into a kitchen sink with all sorts of dials. So this is a guideline on how to build menus more responsibly, which seems really silly, but it actually helps unite everyone who might work on this feature in the future. Even if I'm not around to do a design review, this is something I can point to. And it offloads a lot of the burden that we can sometimes have doing consulting for the entire company.

So today we chatted about understanding with user stories, building with safety principles, bridging with acceptance criteria, and scaling with documentation. Hopefully, these are things you can take back - the slides will be available afterwards - and apply to the processes on your teams. Once again, I'd really like to stress that user safety can mean different things at different companies, and if you can get an understanding of the types of things your users need in order to feel safe using your platform, I totally urge you to do that, even if you don't have a dedicated research team, because I think we should continue to prioritize humans in the technology we're building.

I think we can move towards standardizing an open framework for talking about and building these things. We have so many other frameworks, like Agile or whatever, but I would love to see us move towards building safety into the foundation of everything we build too. And I would love to hear everyone's thoughts about this. I believe we have Q&A right now. You can find me on the medias here. Once again, my GitHub handle is not on brand, but it's fine. Questions?

Questions & Answers

Moderator: We have time for a couple of questions. And a reminder - sorry if you've heard this before - all questions must start with a question; no manifestos today. Can I get a question?

Participant 1: Are there any open source social networking or community tool projects you'd recommend if you wanted to build safety into a new social network?

Fukui: Sorry, the question was about existing social networks, or …?

Participant 1: At my company, the University of Arizona, we have a forums tool but no safety features. So I was just wondering if there were open source projects that would make it easy to integrate?

Fukui: Yes, perfect. Okay, there is the Coral Project. If you haven't heard of it, I highly recommend checking them out. They are all about open sourcing tools for healthier discourse and conversation. They not only do hardcore research into which behaviors make a community successful and which can damage a community, but they actually have tools you can use in your own forums. So, the Coral Project.

Participant 2: What other approaches do you consider to encourage positive behavior? For example, when someone gets blocked or reported, does the person get some sort of … a consolidated list of all the places they have been blocked, so they can start rethinking the way they communicate with the rest of the community?

Fukui: That's a really good question. In that particular case, they get an email notification. And whenever they try to interact with the community, they'll get a little banner that prevents them from commenting, saying, "You've been blocked for two days because of this content that you wrote, and here's the code of conduct." But it would be interesting to see a consolidated place, and how that could encourage people to rehabilitate themselves and put the onus on them to try to make amends with other communities. I don't think we've thought of that, but I'll think about it.

Moderator: We have now.

Fukui: We have now. Thanks.

Participant 3: You mentioned safety on a platform is everyone's responsibility. I work on the platform side - platform engineering at Twitter, where a lot of abuse happens. What would you say people like me, who don't work directly on the product, can do to encourage safety and help?

Fukui: As a product designer who has allies on other teams, I think the biggest thing is to elevate the people that are doing that work, maybe in ways that don't directly mention them. So surfacing docs that have already been written - those sorts of guidelines can be really, really helpful. That way you're able to educate other engineers about what it means to integrate safety and how to do it.

But I think it's also important to make sure that everyone has buy-in. It's one thing to know the strategies to actually solve these problems, but it's still important that we do the work as allies to convince others that it is a problem and that it needs to be fixed, by setting really clear boundaries about what should be acceptable on a platform and what shouldn't. If you're all in agreement on that, that's a great start. And then the strategies for building can come later.

Moderator: If I can add to that. If you are more of a backend engineer on a platform, another thing is that you have control over the data and who accesses it. Part of Consensual Software is that you also limit who has access to that data and for what time period. GDPR plays into that too. Ask: does everybody within the company need to have access to everybody's data? The answer is probably no. So doing an audit of who has access to what data in which logging system is something really actionable that you can do today. Any other questions?
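The audit the moderator describes can be sketched in a few lines. This is a minimal, hypothetical illustration, not any real GitHub or Twitter tooling: the `AccessGrant` fields, the 90-day review window, and the sample data are all invented for the example.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class AccessGrant:
    """One employee's access to one dataset (illustrative fields only)."""
    employee: str
    dataset: str                      # e.g. "user_emails", "dm_logs"
    granted_on: date
    justification: Optional[str] = None

def stale_or_unjustified(grants: List[AccessGrant],
                         today: date,
                         max_age_days: int = 90) -> List[AccessGrant]:
    """Flag grants older than the review window or missing a stated reason."""
    flagged = []
    for g in grants:
        age = (today - g.granted_on).days
        if age > max_age_days or not g.justification:
            flagged.append(g)
    return flagged

if __name__ == "__main__":
    grants = [
        AccessGrant("alice", "user_emails", date(2019, 1, 15), "support rotation"),
        AccessGrant("bob", "dm_logs", date(2018, 6, 1)),   # no justification
    ]
    for g in stale_or_unjustified(grants, today=date(2019, 3, 1)):
        print(f"review: {g.employee} -> {g.dataset}")
```

Running a pass like this periodically turns "who can see what, and why?" into a concrete, reviewable list rather than tribal knowledge.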

Participant 4: My question is, in your experience designing these safety features with your product and engineering teams, have you incorporated them when building the minimum product or minimum feature, or as a later add-on? How do you prioritize, and do you have any experience with that?

Fukui: Very good question. The benefit of being on a Community and Safety Team is that we can incorporate these principles from the very beginning, even for the MVP. The way that I like to work is to think big picture first: "This is the vision that we want." It might take five years, but let's all agree that this is the future we want. Then scale back down to the MVP, the first iteration that we're going to get in front of users and get feedback on. I know that other teams sometimes don't have the opportunity to do that, and sometimes abuse vectors will slip through and only surface once something happens.

In our experience, the way we mitigate that is by acting on it as quickly as possible. If an incident does happen, like a security leak, we are roped in from the very beginning; maybe we're pinged on a specific group that's in charge of dealing with security vulnerabilities, or something like that. We make sure that we are involved whenever any red flag goes up.

Moderator: We also spent the first two years of the team's existence covering tech debt, and we're now in the third year, so now we're actually building the stuff that we want to do. So tech debt does happen. Questions? One more over there.

Participant 5: Talking about the blocking feature and reporting users, do you have any experience with nonviolent communication, and have you found a way to incorporate it into the platform so that it actually educates people to talk in a more positive and open way?

Fukui: That's something that we definitely want to tackle, and it's a gray area. It's not explicitly abusive content; it's not name-calling or anything. It's those sometimes tense conversations. The way that we have been thinking about it is that prose communication, especially asynchronous communication on something like GitHub, takes up a lot of mental energy and actually leads to a lot of maintainer burnout.

Personally, I would love to explore other ways of making it easier to communicate via prose. I'm not sure how to do that yet, but I think it will be an interesting next chapter in our third year of Community and Safety, now that we can be a little more proactive. Now that we've built the foundation for dealing with abusive content, we get to level up and start building more inclusive features like that. That's a good point.


Recorded at: Mar 01, 2019