An Important Announcement from Garbo
Season 2

The Business of Trust and Safety with Noam Schwartz

In this episode of Reckoning, Kathryn Kosmides speaks with Noam Schwartz about trust, safety, and prevention in the tech industry. Noam is the CEO and Co-Founder of ActiveFence which provides a solution that enables Trust & Safety teams to be proactive about online integrity so they can keep their users safe from online harm across content formats, languages, and abuse areas.

In this episode, Noam discusses:

  • How he built ActiveFence
  • The history of trust and safety on the internet … it’s been around longer than you might think
  • Why companies are investing millions of dollars into trust and safety platforms
  • Proactive vs. reactive online harm prevention
  • Proving ROI of trust and safety to apprehensive customers
  • Where to start as a small business looking to implement trust and safety technology

You're listening to Reckoning, the go-to resource for conversations about gender-based safety, survival, and resilience in the digital age. Reckoning is brought to you by Garbo. Garbo is on a mission to help proactively prevent harm in the digital age through technology, tools, and education. I'm Kathryn Kosmides, the founder and CEO of Garbo and your host for each episode. In the interest of safety, I want to provide a content warning for listeners, as we do discuss some hard subjects in each episode, so please use your own discretion when listening. You can learn more about Garbo and our guests by visiting our website. Thank you so much for being here and listening to this episode.

Noam Schwartz is the CEO and Co-Founder of ActiveFence, a fast-growing trust and safety company pioneering the proactive approach to online integrity. ActiveFence supports tech platforms around the world in the fight against disinformation, hate, terror, child abuse, and other crimes online. Noam has over 15 years of experience in the fields of open source intelligence, security, and data science. Prior to ActiveFence, he founded and served as the CEO of TapDog, which was acquired by SimilarWeb. Noam began his career as an intelligence officer in the fields of research and counterterrorism in the Israeli military and holds a law degree and MBA from BIU. Our conversation today will cover the growth of the trust and safety industry and why online platforms are finding it so difficult to get it right. We'll dive into big tech policy versus regulatory mandates and the actual cost of trust and safety online.

KATHRYN: How did you come to found the company? It's a very interesting story. I know we touched, in the first conversation we had, on how this whole thing kind of started.

NOAM: The company was founded in 2018 officially, but I started really working on it, thinking about it, and believing in it in 2015. In July 2015, my first daughter was born in Lenox Hill Hospital in New York, not far from where you are right now. She came super early, at 26 weeks, less than two pounds. We spent a very, very long time in the hospital, over three months. And I had a lot of time to think, and I was also just starting a new job. I had just sold my previous company to another company and was starting a new position. I felt bad that I wasn't working and was just sitting there. So I brought some work to my new office in the neonatal ICU, and what I did was go over the results of text classifiers. I was trying to understand the content of public libraries of files that were shared online. I was going over the results one by one, and when I got to one of the high numbers, I came across a folder in one of the biggest cloud companies in the world. I came across gigs and gigs of pedophile content. I remember that moment, sitting there in the NICU, literally back to the wall, facing the children. Those tiny creatures, they're all like two or three pounds tops. And something in my mind snapped. It was intolerable, and I immediately reached out to the CEO of that company, a public company that I knew from my days in Silicon Valley, and asked: what's going on? How can it be? How could you let this happen? And he answered right away, like after five minutes, which is insane for that person, and said, "Thank you so much for reporting. We're on that. We're removing it." And he also CC'ed the trust and safety team. Now, it was the second time in my life that I had ever heard the term "trust and safety". The first time was immediately after I got out of my army service. I was a counterterrorism intelligence analyst for a while, but when I first stepped into tech, I was a trust and safety analyst for a big phone company. I knew the language.
I knew the materials, and funny enough, not a lot has changed since then. And that was 2008. It's been a while, but we're still using very similar tools, similar technologies, very similar workflows, very similar safety measures. So I figured, from having the conversation with the CEO and their trust and safety team, there wasn't really a lot they could do. They didn't have anything proactive, due to all kinds of privacy reasons. They couldn't use any kind of passive protections, any kind of protection through APIs. They weren't even connected, back in the day, to the network. I remember that I was shocked. I was like, 'How could it be?', but that wasn't my issue. I was with my daughter, with my wife. I wasn't really focused on that, and I had a decent job. And then in 2016, when the whole disinformation campaign was getting results, and in 2017, when ISIS and Al-Qaeda really started bombarding the social media companies and the other user-generated content companies, they had had enough. They felt it was time to do something about it. Me and a bunch of my old co-founders, all from previous companies, started ActiveFence, which is a proactive technology tool set for trust and safety teams to use to manage the entire workflow. So we're offering that to trust and safety teams in companies with a lot of user-generated content, and user-generated content is everything from text, to ads, to chats, to files, to videos, everything that is ever uploaded. Anything you can type, in gaming, hosting, streaming services, really everything. The proactive way to find malicious content, across child abuse, grooming, bullying, violent extremism, disinformation, hate speech, everything, across every format and across over 70 languages. And everything is done in a transparent way that also makes sure that moderators are doing it effectively and without bias. And it's also protecting their own safety.
So we're thinking safety by design, which is a huge trend: the product managers and the company builders are supposed to implement safety measures in the process of actually designing and developing the product. We also believe that you need to protect the moderators, protect the trust and safety team, so you need to minimize the amount of content that they're viewing, make sure that they don't need to see the same types of content twice, and make sure that they are also safe.

KATHRYN: So let's kind of rewind a little bit. You mentioned online trust and safety and getting your start in this industry in 2008 and then revisiting it in 2015. A lot of people say the online trust and safety industry is only about six, seven, maybe ten, if you're lucky, years old. Can you just talk a little bit about the history of online trust and safety and how it's been revolutionized in the last two years or so?

NOAM: Trust and safety, in its current form, is as old as the internet itself. From the old days, since you had any ability for a single person to post online on a private server or for a university somewhere in the world, you had trust and safety issues. They weren't an issue back then. It started when a lot of people started posting content and started shaming each other, or started posting content that was not acceptable on the host, everything that also led to the creation of Section 230. The whole defamation lawsuits, which is a very interesting case that we don't need to get into right now. I can send some materials to whoever wants to read the long articles about it. But since people started posting online, there were the equivalent of trust and safety groups that handled that. It's kind of like an evolution of customer service. So in the beginning there were indexes, like AltaVista, and if someone posted a link that was not supposed to be there, the team itself checked it and then removed it on its own, or someone flagged it and another team member went in and checked it. There are a lot of public publications about how Facebook did it in the early days, where the team members took two shifts to remove content, when it was just them doing the moderation without the [inaudible] in the very early days. Things kind of worked, but as the internet gained a lot of traction and a lot of scale, that didn't work anymore, and throwing more and more people at the problem didn't work. So that was when the first Naive Bayes classifiers were introduced, and all they were doing was finding keywords, all kinds of words that you didn't want on a platform, but then more people were still needed for solving the problem. Then image detection capabilities were added, but those things were never enough, because in most cases companies have their own set of policies. And even nudity is not the same for every company.
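The early keyword-matching filters Noam describes could be sketched roughly like this. This is a hypothetical Python illustration, not any real platform's system; the deny-list and function name are made up for the example.

```python
# A minimal sketch of an early keyword-based moderation filter: scan each
# post against a deny-list of terms, before ML-based classifiers existed.
# The keyword list here is illustrative, not any real platform's policy.

BANNED_KEYWORDS = {"spamword", "scamlink", "slur_example"}  # hypothetical deny-list

def flag_post(text: str) -> bool:
    """Return True if the post contains any banned keyword (case-insensitive)."""
    tokens = text.lower().split()
    # Strip trailing punctuation so "spamword!" still matches "spamword".
    return any(token.strip(".,!?") in BANNED_KEYWORDS for token in tokens)
```

As Noam notes, this approach breaks down quickly: it misses misspellings, other languages, and images entirely, which is why image detection and statistical classifiers were layered on top of it.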
One company would be more tolerant towards one type of nudity, or even with hate speech, or all this disinformation, which is relatively new, in the public mind at least. And those things were mainly developed in-house, in many cases. There were a few vendors that introduced a few point solutions that were used as secondary tools, but there was never a real meaningful way to help trust and safety teams manage their day to day. And I think, to answer your question, only in the last five, six years has the public started paying attention. The internet grew and grew in an amazing way. In the first years, it was the inventions. Everything was invented: e-commerce was invented, chat was invented, social networking was invented. It was incredible. And then it started being more popular, and people started using it all the time, and their online identity became their identity. I grew up online. Given how much of a geek I am, my most important online interactions growing up were on IRC and on gaming chats and on Steam. That was a very important part of people's lives, but then it became deeper. Social networking became deeper. Communication online became deeper. I got the first nanny for my daughter online, and we communicated online, and my business was started online. Everything was online, and the same sophistication that people experienced online also happened with the bad actors. Bad actors were there from the early days. There were all kinds of trolls on BBS servers, and that was very long ago, in the nineties. There were attacks and doxing and takeovers of channels in IRC and in other places, also in the nineties and the beginning of the millennium. There was grooming, and there were networks distributing child sexual abuse materials, and incitement and terror content and disinformation were everywhere. But only recently did it become so sophisticated, so decentralized, and so hard to detect that it really became an immediate, serious problem.
Nobody really cared when it was small, but now that everybody gets to experience it, all of a sudden we don't want it, and it's especially painful when it gets to kids. Kids are getting online so early these days, and I think the heaviest users of the metaverse, games, and networks are probably really young teens. The effect that this has on them and the amount of danger for these ages is just insane. People noticed, and things like Christchurch, which got everybody's attention when that terrible attack was streamed, and everything around the disinformation attacks, really moved the needle in the eyes of the world, and here we are.

KATHRYN: I touched on, when we first met, my own experiences as a young person on the internet. My first boyfriend was an online boyfriend. It was very innocent; we would just log in at the same time and play games together, things like that. But then there was a lot of grooming, and we didn't have these terms back then. The word grooming is relatively new. A new term, not a new idea. I think what I appreciate about you is coming from a place of purpose, a mission-driven place. For me, it's wanting to prevent what happened to me from happening to other people. For you, it's very oriented on your family and your own children. ActiveFence recently announced $100 million in funding. You're starting to see these huge investments in trust and safety platforms, no longer in-house tools, but actual platforms doing this kind of work for other companies. Especially if you look at just the last 24 months, we've seen a ton of investment in trust and safety. Why do you think this is happening? Is it because people are finally paying attention to it, or is it regulation? What's the reason that we're seeing this huge investment in this area?

NOAM: In the last few years, my team actually calculated that there was more than $1 billion invested in trust and safety worldwide, which is a very meaningful sum for this emerging market. And then there was also M&A, for those of you who follow the M&A market, and a market forming for trust and safety. I think what's driving it is two main understandings. The first one is that the current way of doing things is just not scalable. Adding more and more people to the problem just creates more problems. It's very hard to sustain this type of solution in a way that makes sense financially, because the companies are still thinking about the bottom line, and just adding thousands and thousands and tens of thousands of people doesn't make sense. They also leave. It's very hard to train them. It's very hard to do the Q&A. It's really hard to make sure that their decisions are not biased. It's just so many different things [inaudible]. This was a wonderful discussion for this industry, because it brought up a topic that was not discussed enough: how you should handle topics that are controversial. There are no alternative truths. You don't want to work with alternative truths. So the whole thing is making sure that you have a transparent process, that you're trying to remove bias as much as possible, that you have decent Q&A; both quantitative and qualitative data is key. The understanding that this is not scalable is getting into the minds of trust and safety teams, but also executives and board members in those companies. Also, the costs are insane. Sending every piece of content to a video algorithm to understand what's going on in the video or the image, the costs are very high. The amount of false positives is very high. The accuracy is not great. There are a lot of adjustments that need to be made. Companies eventually don't want to keep developing this in-house.
They realize that every time they want to change their policy, they need to shift another team of scientists to also change the algorithms, which for some companies makes sense if it's core, but for most companies, where it's not core, they just don't want to keep doing it over and over again. There's also the moderation platform, which needs a lot of adjustments. When you're getting bigger and your team is getting bigger, you need to maintain that. So there are a lot of companies out there that have some sort of legacy software that they are maintaining, and it's so painful to maintain it and to add more and more teams to it. And especially now, when markets are not what they used to be, margins are so important and unit economics are so important, companies should focus on what's core to them: grow the community, make the users happy, make their partners happy, and not build the same thing over and over and over. I think those are the two key components: the existing solutions are not scalable, and they're too expensive.

KATHRYN: And they're often very reactive. Content moderation has traditionally been: someone reports someone, and then someone reviews it, or it's reviewed through AI, or whatever it is, and then a decision is ultimately made. And ActiveFence is all about being proactive, this proactive detection, and I think we're seeing that shift. Garbo is also very focused on proactively preventing harm. Why do you think companies are investing in this kind of proactive prevention rather than the reactive content moderation side of things?

NOAM: The reason I think they are taking this type of approach is that you're not waiting for something bad to happen. You're trying to prevent it from happening and minimizing the chances of something bad happening. I'm a big fan of this approach. When we're thinking about our proactive approach to content detection, it's not necessarily the driver for the whole market; I think it's more of a differentiator for ActiveFence. We're telling our customers that they can choose whether they want to wait for something bad to happen, or they want to proactively find it and proactively protect the community, and the users, and the partners, and the products, and bring more safety forward. Taking this approach eventually means you can learn from all of the bad stuff that is happening online and never have to see, or wait for, something bad to happen on your platform. From our perspective, if there is a new fraud scheme being discussed on some fraudulent forum on the dark web, and the same thing is happening on one of our customers' platforms, we can automatically detect it before it causes harm. So we can improve all of their KPIs with them, and we can prevent bad actors from hurting some of their users. I really believe that everybody who takes a proactive approach will eventually win, also in the hearts and minds of users, because people will just feel safer, more protected, and they can trust the product and platform. It's inevitable, but it's also evolution. It is very similar to cyber. Cyber was super passive in the beginning: you'd wait for someone to try and get to your server. Now it's way more proactive. You're trying to find predators and persistent bad actors, all kinds of bad actors that are trying to attack your network, before they cause damage. We're taking a very similar approach; it's not a new idea, it's just an evolution.

KATHRYN: And evolution takes time, and I think we're at that tipping point right now where, as you said, when you have safety features, especially proactive safety features, you will win, because people feel safer on your platform. There used to be this mentality: a frictionless onboarding experience, get as many users as possible, this growth-at-all-costs mentality. We finally realized that those costs were not financial. They were real human lives. That's who ended up being harmed in this process. We're starting to see a shift away from this move-fast-and-break-things mentality, where those things are humans, and we're starting to see the safety by design concept and safety by design principles. You mentioned this a little bit earlier, but what is safety by design, how would you define it, and how do you see companies implementing it?

NOAM: I think it's too soon to call how that will actually come into effect, because it's a new concept, a wonderful concept, but relatively new. We're seeing that a lot of companies are building their trust and safety teams before they have users, and that's the most important effect that we see out of it. We have customers that are pre-launch, no users, and we're helping them design their policies. We're helping them design the actual workflow. We're implementing our classifiers and algorithms. They're using all of our safety tools. And they're ready. They're looking at trust and safety the same way they look at cloud computing. They need the infrastructure to run their business, and trust and safety is part of that infrastructure. There's no user-generated content without trust and safety. This understanding was not there two years ago. If last year we had two customers that were pre-launch, this year we'll see way more. And that's from streaming, to social networking, to gaming, from crypto gaming to regular social gaming. Everywhere. This is a wonderful trend, and everybody wins. The users will win, society will win. Building safety into the development process is wonderful. And it also kind of stops this weird thing that happens when a North Star KPI is running over the safety of people. As you said, one goal is above everything, and that goal eventually is revenue; whoever starts a company has a fiduciary duty to the shareholders, they need to make the company profitable and successful, but they also have a responsibility to their customers, their users. And these types of [inaudible] really help balance everything out. So we can still be super profitable, and get tons of engagement, and have the most wonderful content discovery algorithm in the world, but still make sure that no one's getting hurt in between.
And to make sure of it. If someone shares revenge porn and it's very popular, is that worth it? Probably not. So I'm very proud to be part of that change. And you should be too.

KATHRYN: Ah, thank you. It reminds me, we have a board member, Rachel Gibson, who has worked at non-profits at the center of gender-based violence and tech-enabled abuse her whole career. And she goes, 'I can't tell you how many platforms have built something and then asked me to look at it after it's launched, or when it's about to launch, but it's already built.' And she was like, 'Kathryn, you were the first person who ever came to me before they ever wrote a line of code to say, hey, let's think about how this could actually go wrong.' That is safety by design. We are also building these principles alongside the people. And I think the very first thing that we do when we look at a new feature for Garbo is ask, 'How could this go horribly wrong? How could this be weaponized?' And you have to think like a bad actor. It's so hard, 'cause you're always trying to prevent it, but sometimes you have to think like a bad actor to figure out how different features can actually be weaponized. I am really excited about this shift of seeing companies say, 'Let's think proactively instead of reactively about safety.' We've talked a lot about big tech: the big social media companies we all know, the big online dating apps we know, gaming companies, whatever it may be. But there are hundreds, if not thousands, of smaller online platforms to connect people that are popping up. Whether that's new Gen Z dating apps like Snack, Lolly, or So Syncd, or new social media platforms like VOLE or IRL, they're all very concerned about trust and safety on their platforms. They're trying to do it proactively, even prior to launching, but it's definitely not easy, because trust and safety isn't cheap. It can be quite expensive. When you're looking at the cost of trust and safety on your platform, how do you explain it to companies? This is something we're doing all the time.
It's almost better to invest in it like insurance, that proactive prevention, the risk mitigation side of things, rather than the reactive side of things. But there is a cost, and they're like, 'Well, it's so expensive.' And I'm like, 'Yeah, but one lawsuit or one bad interaction could cost you hundreds of thousands, if not millions, of dollars.' But how do you get them to make that shift in their head?

NOAM: The answer to that is different every two quarters. When we just started, we had this thinking about what the compelling event is that would cause a company to either adopt new trust and safety solutions, upgrade their trust and safety infrastructure, or make any change, while they're speaking with us. In the beginning, we thought it's a crisis, so bad PR or something like that. That wasn't a good thought, because when there's bad PR for one of those companies, it's heartbreaking, because in many cases something is being exposed, and this just encourages other people to take advantage of that thing. It's very different from cyber analysts who are exposing a vulnerability. They usually tell the companies first, the companies fix it, and then the analyst advertises and shares what they found and presents it at conferences. It's a very developed industry. With trust and safety, in many cases there are "gotcha" moments, and reporters and people out there are just shaming the companies. I saw chatter in these communities saying, 'Hey, we saw in this article this new loophole, that we can actually abuse this product,' and by the time the company closed it, more harm was created. I really didn't like that as the compelling event, the PR crisis. And as time went by, I saw that in many OKRs and KPIs in companies, there are actual goals to increase the safety level of the company, and many different companies call them many different names. I don't want to mention the specific names of the KPIs, but this is a goal for a lot of folks in a lot of companies. And they're compensated sometimes by increasing those numbers. We're seeing people reaching out to us saying, 'Hey, I need to improve the safety on child sexual abuse material, on violent extremism, or disinformation, or whatever,' and someone saying, 'Hey, for this game, I want to build a safer experience,'
or 'My classifier is broken,' or they decided they can't keep up with the scale. They're already very versed in what they need to do in order to achieve safety. The companies that you mentioned have some very sophisticated trust and safety teams, and they're all over the place. Even so, the pool of senior managers in trust and safety is very small; there's not a lot. And now it's an amazing opportunity. I'm telling this to my team all the time: whoever invests the time and really understands the industry, really has a sense and feel for the terms, and learns how to actually do trust and safety, that's the new cybersecurity. And it really pays off. A lot of folks who are just now starting their careers in trust and safety, I'm sure we'll see them in very important and high positions in companies, especially in this new era of Web 3.0. This is so exciting. I can see it happening. But to your question about proving ROI: it's been a while since we had to prove the ROI of doing trust and safety. We need to explain why we're better than a specific solution, and defend our differentiators and explain specifically why this is better. But in most cases, because of the proactive approach, because of solving so many problems, so many abuse areas, so many languages, all formats, providing both the software that manages the trust and safety operations, and the automatic content moderation API, and some of the services that help investigation teams find anomalies and have an understanding of trends, it almost became easy.

KATHRYN: What I am seeing is also a lot of regulatory push for transparency reports coming to the UK and, soon, I believe, coming to the U.S. And transparency reports present a lot of problems. We know these platforms have a lot of problems, and so now we're seeing that shift to, 'Oh, shit, I have a lot of problems, now I need solutions.' They can't just present the problems; they have to say, 'Okay, and here's how we're solving it, here's what we're doing.' And so that's, I think, another reason why trust and safety is really being adopted as an industry. You're seeing things like the TSPA, the Trust and Safety Professionals Association, pop up to de-silo trust and safety. You said there are only a few people, and I wouldn't say anyone is an expert. It hasn't been around long enough for anyone to be a true expert in trust and safety, and it's so nuanced and gray, etc., and transforming, with Web 3.0 and things like that. But how does de-siloing information help these new companies who are trying to be proactive about safety but really don't know where to get started? Where would you say are the three to five places a new company should really invest in trust and safety? What are the technologies or the tools that they should focus on? Because obviously there's so much you can do: content moderation, AI, detection tools, background checks, ID verification, voice detection; there are so many different things that people can do, but what should small companies focus on?

NOAM: So, first of all: regulation. Regulation is coming; it's already changed in many different countries in Europe. In the UK, it's probably going to be the most interesting regulation. In the EU, we have something broader. It'll probably take time until we get to the U.S. We already saw some regulation about sex trafficking a few years ago, and now there will be something that will regulate what happens to platforms that don't scan for CSAM. But regulation is now a driver of adoption of trust and safety technologies; there's no question about it. For smaller companies, I don't think they are as concerned about regulation. I think they're more concerned about perception and about, God forbid, doing the right thing. I wouldn't start with transparency for smaller companies. We're seeing a lot of companies adopting the concept of a transparency report, a consumer safety report, or an annual transparency report, and that's awesome. There's no single format that is right, right now. I think a lot of companies are using it as a way to show that they care about trust and safety, which is also an amazing trend. But specifically to your question, what a company that is starting out needs to do really depends on the type of content. If you're a dating site, probably the first thing that you would do is implement [inaudible]. That is basic: who is this person, let's verify their identity, let's see that they're not a sex offender or anything like that, because this is the most important thing. But generally speaking, for a company that has some sort of user-generated content, the first thing they want to do is implement some basic tools that will allow the users to take an active part in guarding the community. So "report", that's the basic report button that sends the content to a moderator. That moderator can be one of the founders if you're just getting started, or one of the employees, but that's the basic of the basics.
Let the people on your platform self-regulate in one way or another, and that will take care of most of the early problems. Small platforms don't have a lot of trust and safety issues in the first days. Once there are a lot of reports, you need more people on the moderation platform, and you need to figure out how to prioritize. When you get hundreds and hundreds of flags every day, you start thinking about ratios, whether something got more flags, and you start implementing some sort of identity management: who is this user? That's a new user, and they posted 50 new items on day one. Well, I probably want to quarantine their posts and take a look later, because that's 99% spam. But if it's a user that has been there for a very long time and they got just one flag, well, maybe it's not bad content. Maybe someone is protesting against their opinion by flagging it as inappropriate. This is also happening all the time. So the next thing would be adding a layer of automatic content detection classifiers to find bad keywords, to find sentiment, to have some context. Above everything, when you implement a proactive content detection solution, a lot of those steps become unnecessary. The proactive solution will keep watching for violations that are against the conduct policy of the platform and will flag them to the content moderators. This really helps companies of all sizes, whether they just got started or they have millions and hundreds of millions of users, to proactively keep their platforms safe. Also, one of the issues for small platforms, especially niche platforms, is that the types of violations in many cases are not known. So once you get past the spam and the bots, you get weird pieces of content that you don't recognize, across many different languages, and you're not sure what to do with them. It's like, is this bad? I don't know. Is this terrorism? I'm not sure.
Working with a partner that has seen that content hundreds of times across many different platforms, and can really apply AI in a smart way to protect your platform, is key to making sure you're focusing on the right things.
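The triage heuristics Noam sketches above (weighting a flag by the account's age and posting volume) can be illustrated with a toy example. This is a hypothetical Python sketch; the function name, thresholds, and labels are invented for illustration and are not ActiveFence's actual logic, which combines far more signals:

```python
# Illustrative flag-triage heuristic for user-generated content.
# Thresholds here are invented; real platforms tune them empirically.

def triage(account_age_days: int, posts_today: int, flags: int) -> str:
    """Decide what to do with a flagged post."""
    # A brand-new account flooding the platform is almost certainly
    # spam: quarantine first, let a moderator look later.
    if account_age_days < 1 and posts_today >= 50:
        return "quarantine"
    # Several independent flags on one post: escalate to a moderator.
    if flags >= 3:
        return "review"
    # A long-standing account with a single flag may just be someone
    # protesting an opinion by reporting it as inappropriate.
    if account_age_days > 365 and flags <= 1:
        return "keep"
    return "review"

print(triage(0, 50, 1))    # new account mass-posting -> quarantine
print(triage(400, 2, 1))   # established account, one flag -> keep
```

The point of the heuristic is the one Noam makes: identical flags mean different things depending on who posted the content, so identity signals must feed into prioritization before any content analysis runs.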

KATHRYN: As we wrap up this conversation: we started with the history of online trust and safety, so what's the future, do you think? Not just for ActiveFence, but for the wider industry of online trust and safety?

NOAM: I think we're just getting started. I was there very early on in the cybersecurity industry, being a white hat hacker growing up, when there was nothing out there besides one or two companies and everything was so innocent and open. That's how it feels today. When I explain to people what trust and safety is and why companies are doing it, I still hear "Hey, we're doing it all by ourselves" and "No, there are no other vendors." And look at the pace the internet is growing, the pace content is growing: we're already seeing gaming surpass television, music, and film combined, and social networks are getting so big. Everything online is just taking over. Trust and safety is the safety, protection, and security of that progress, the same way cybersecurity was for IT. I'm sure you see a lot of small companies, a lot of VCs, and organizations like the TSPA; there are at least 20 to 25 organizations we're working with that are associated with trust and safety departments, helping them figure out their policies and what they need to do, supporting them one way or another. It's really, really exciting, and I feel so fortunate and lucky to be part of it. My goal is to protect all of the online companies out there and to help them put [inaudible] and partners. Deep, deep down, I want to protect my daughter. I don't want her to have any negative experiences later on. I know I can't really control and stop everything, but I'm going to do everything I possibly can.

We hope you enjoyed this conversation. If you're interested in learning more about the topics discussed in this episode or about our guests, visit our website at Now available: Garbo's new kind of online background check makes it easy to see if someone in your life has a history of causing harm, while balancing privacy and protection in the digital age. This episode was produced by Imani Nichols, with whisper and mutter. I'm Kathryn Kosmides and I look forward to having you join us for the next episode of Reckoning.
