EPISODE
5
51:31

Information Sensation with Tara Hairston

In this episode of Reckoning, Kathryn Kosmides speaks with Tara Hairston about information sensation and how it can be used to push important conversations forward. Tara is Senior Director, Technology, Innovation, and Mobility Policy at the Alliance for Automotive Innovation, which is the singular, authoritative, and respected voice of the automotive industry.


In this episode, Tara discusses:

  • How the information ecosystem encourages visceral responses to current events 
  • Deepfakes and doctored videos, such as the Tom Cruise, President Obama, and Nancy Pelosi fake videos
  • How the COVID lockdowns affected stalkerware patterns 
  • Technology-facilitated abuse beyond stalkerware
  • Anti-abuse threat modeling's limits
  • How we can pressure media, government, and tech companies to be accountable for technology-facilitated abuse

Welcome to Reckoning, a podcast that explores gender-based justice, safety, survival, and resilience in the digital age through conversations with experts and advocates. I'm your host, Kathryn Kosmides, the founder and CEO of Garbo, a tech non-profit building a new kind of online background check.

Kathryn: Tara Hairston has over 25 years of experience in advocacy, public policy strategy, coalition-building, and stakeholder engagement. Her career has focused on aligning government relations and public affairs strategies to advance business objectives, political intelligence, and reputation management. She is currently the Senior Director of Technology, Innovation, and Mobility Policy at the Alliance for Automotive Innovation, and previously worked as the Head of Government Relations in North America at Kaspersky. In 2019, she helped form the Coalition Against Stalkerware, and in 2021 she joined Garbo's Advocacy Council. Today, our conversation is focused on "Information Sensation", which describes how we can use sensation to drive attention and awareness to important topics.

Kathryn: Hello, hello, how are you today?

Tara: Doing well, Kathryn, how are you?

Kathryn: Fantastic, really excited for today's conversation! We can dive right into it. So the theme of this episode is how information sensation can be used to push important conversations forward. Let's start out with what do we mean by information sensation?

Tara: Well, from my perspective, information sensation in the media or the information ecosystem is typically used to encourage a visceral response to stories, to increase people's engagement with certain content, and to promote virality. So we have this kind of psychological or physiological context, which is how we use our senses to process information and experience the physical world. And then in the information and media ecosystem, it's really focused on how to promote engagement and visceral reactions to stories, and promote their virality so that more people get engaged as well.

Kathryn: And so can you give us a few, maybe recent examples of major headlines that, while sensational actually did push really important conversations forward?

Tara: Yeah, so I think one of the areas where there's been a sensationalist aspect, but that is also really pushing those conversations forward, is the use of deepfake technology, to be quite frank. We've seen some really well-done videos that demonstrate how far this technology has come, because editing video is not necessarily a new technical skill, but with the advent of machine learning and artificial intelligence, and in some cases adversarial artificial intelligence, we've seen some pretty significant advances in that technology. So I'm referring to videos like the doctored Facebook video of House Speaker Nancy Pelosi that was slowed down to such an extent that it made it seem like she was incapacitated or potentially inebriated, the deepfake video of Tom Cruise that was on TikTok that everyone was looking at, or even, going a few years back, the public safety announcement that was supposedly from President Obama. And then, even more recently, just this summer, there was the Anthony Bourdain movie called "Roadrunner," where they used an artificial intelligence version of his voice to contribute to the film itself.

Kathryn: That one was especially a little creepy, and I think a lot of people had a problem with it. But it's something that people who are big believers in deepfake technology and the good side of it (we'll talk about the bad side in a second) often point to: oh, it can bring back historical characters, bring them to life, and create these more engaging experiences. What do you think about that use case specifically?

Tara: Well, I don't know the specifics, but I do remember reading some stories that the filmmakers had not gotten permission from his family to use his voice in that way. So I think that while there's that potential, we saw this debate a number of years ago with holograms: do we bring back these famous and beloved music stars for concerts, or to do their section of a song, and how exciting that would be. I guess if there's support and clearance, if you will, from the family, then that's one thing. But I don't think we should mistake having that capability for somehow having the person back, because obviously that's not the case.

Kathryn: Mm, yes, yes. Now let's switch over to the reality of deepfakes, which the headlines don't often cover. So can you talk a little bit about the statistics of deepfakes? They're quite alarming once you know the reality of the situation.

Tara: Yeah, and it's really interesting because there aren't clear statistics, or at least none that I'm aware of, that document how often deepfakes are created or uploaded online. But there was a 2019 study done by a firm called Sensity, formerly called "Deeptrace," that concluded over 96% of online deepfakes were non-consensual pornographic images of women, and I've heard statistics as high as 98%. So I think one of the things that gets a bit lost in how quote-unquote "cool" the technology is, especially if it's used to bring back a deceased celebrity or in some of these other really viral examples, is that it tends to not capture this large swath of cases where the technology is being used without someone's consent, where it's being used to harm people. So everyone can get a little bit concerned about deepfakes in the electoral interference context, as we saw with the Pelosi video. And there's always a conversation around election time about whether content is being manipulated, and that's clearly an important discussion. But it tends to suck all the oxygen out of the room, because the vast majority of the ways this technology is being used is already present on the internet, is already harmful, is already non-consensual, and we don't talk about that broader swath of activity.

Kathryn: And how do we get media, or just individuals, to shift this narrative and talk more about the larger conversation and the reality of the dangers of deepfakes and this technology?

Tara: Well, from my perspective, I think it's really about trying to draw these connections. If someone working on deepfake technology reaches out to someone that is active on issues related to gender-based violence or violence against women and girls, or some variation of those issues, hopefully that would be the ideal situation to talk about this broader ecosystem of non-consensual images that are already online, that are being produced through deepfake technology. But I think the opening is that whenever people start to really engage with this content, these really sensational deepfakes of celebrities, both living and deceased, maybe there's a way to say, "But did you happen to know that this technology is being used in another format?" This is not just something that's for kicks. It's not just something to show off how cool you are. It's not something to show off how technical you are. This is something that's being actively used to harm people. And I think there would hopefully be opportunities to raise that issue, but I don't know that I've necessarily come up with a perfect answer for how to get media to engage, because they're going after the sensationalist aspect, and issues related to gender-based violence or domestic abuse or intimate partner violence, unfortunately, despite how prevalent they are, do not seem to get the type of attention that they absolutely need.

Kathryn: I do like that angle, though, of using the conversations that are already being had to plug in those issues. If you're talking about deepfake technology, like our friend Adam Dodge, I think he was on CBS this week. And I think the angle is being able to talk about that type of technology and then plug in, "Hey, this is majorly a women's and girls' issue, which means it's an everyone issue." So it's educating those folks who are talking about this technology, who usually happen to be of a certain demographic, about the reality of who it's being weaponized against.

Tara: No, I absolutely agree. And I think another avenue can be trying to proactively engage media. One of the realities for people in advocacy or activism, or someone such as yourself that's just incredibly busy with your own company, is that there are so many other things going on that you don't think, "How can I do something proactive?" because you're so busy trying to help the people that are being harmed, which is obviously a very understandable imperative. But I think what some of us that maybe aren't so closely involved in directly helping affected people can potentially explore is: how do we proactively talk to these reporters? We get a feel for the reporters covering these particular topics. So can we proactively reach out to them and say, "Hey, we noticed you did this piece on deepfakes," and it may have been related to one of these viral videos we're all familiar with, "but did you know about these statistics and how it's being used to harm, or how there are these apps that use similar technology that only work on denuding women, or that the 95, 96-plus percent of the non-consensual images online being produced through deepfake technology are of women?" Maybe there's a need to proactively engage media and policymakers and others, because I think if you can get more people talking about it, media will usually start to pay a bit more attention.

Kathryn: I love that idea of following up with reporters who are already on these beats, but might not know the reality of the situation or this lens or this angle of the conversation. That's great. And I think you kind of did something similar to that when you were a founding member of the Coalition Against Stalkerware. Stalkerware started being a big piece of the conversation and another hot topic recently. So let's take a step back there, and tell the audience a little bit about what stalkerware is and how the narratives in the press have shifted, especially recently, about this form of technology.

Tara: Yeah, certainly. So the Coalition Against Stalkerware was founded nearly two years ago. It focuses on, as you said, stalkerware, which is commercially available surveillance software that allows someone to essentially covertly or surreptitiously monitor the device of another person without their consent or knowledge. And it can do things like access your call logs, your emails, your social media or text messages, your geolocation data, and your photos; it can turn your camera on and off; it can even access deleted photos. So you can imagine, in the context of image-based abuse, how that could be dangerous. You had some personal photos that you took with a former partner, you deleted them, you think they're gone, but the technical capability of some of these software programs can actually allow someone to access photos that have been deleted. What makes this so pernicious is that, unlike some other forms of tracking software, like parental controls, or even employee monitoring in certain instances, or student proctoring software, stalkerware will run hidden in the background on a device. And there's no way for someone, just by looking at their device, their home screen, or their list of programs, to instinctively know that this program is running. So that's what makes it so terrible. And that's why the Coalition Against Stalkerware was formed: to try to raise broad awareness of stalkerware, which is this emerging threat in online gender-based violence or technology-facilitated abuse, so that people get to better understand that this stuff is out there.

Kathryn: And can you talk a little bit about the size of this problem? I know there were some recent studies that came out about the prevalence of this dangerous new form of tech-enabled abuse.

Tara: Yeah. And this is one of the things that the coalition is still trying to get its arms around, because there are no global statistics, unfortunately, on the scope or size of this problem. But as you just alluded to, there have been a number of IT security vendors that have released reports on their detections of stalkerware among users of their products. So for example, a firm called Avast reported a 55% increase in stalkerware detections since the beginning of the COVID-19 pandemic, so since March of 2020. Malwarebytes, another security firm that also happens to be a member of the coalition, reported a 565% increase in monitoring application detections and a 1,055% increase in spyware application detections in 2020. So one of the things that compounds the challenge is that every company is going to have visibility into the detections of stalkerware that their own products make, but that doesn't necessarily mean they're going to have visibility into stalkerware globally; it's going to be within their user base. And then also, as you can hopefully observe when comparing the Avast and Malwarebytes statistics, sometimes it comes down to what is being defined in terms of the detection, so there may be some things that one vendor includes as a monitoring, stalking, or spyware program that other companies do not. So there's the challenge of having that global visibility, but there's also the challenge of: are we making apples-to-apples comparisons? That's something that the coalition is continuing to work through.

Kathryn: That's crazy. That's crazy. And I think we've just seen the proliferation of tech-enabled abuse explode during the pandemic because people are in close quarters with each other, right? So whether that's a domestic situation or a family situation, whatever it may be, an abuser no longer needs to physically stalk someone because they're in the house together; that person isn't going anywhere. And so now they're very concerned about what that person is doing on their device, because it's likely that you're spending more time on your device since you're home so frequently and not out in the world. And so this explosion of stalkerware is just absolutely insane to me and really devastating, because your device is your life; your device holds your deepest thoughts. And when someone has access to those, it's insane what they can do with that information, how they can weaponize that information against you, whether that is real threats of "Hey, I'm going to publish this picture that I found of you," or gaslighting someone: all of a sudden, they know these details about your life that you're not sure you told them. That's the danger of stalkerware. It's not just "Oh, well, do you have anything to hide?", which is something you often hear from people dismissing concerns about this type of technology. No, this is weaponized in such dangerous ways that it goes from digital to in-person. These are no longer just digital things. This is impacting your real life.

Tara: Absolutely. And I think it's really important to mention to anyone that's listening that stalkerware is, like I said, an emerging threat, but it is not by any means the most significant threat that people face when there's technological abuse involved. Technology has been used since time immemorial to perpetrate domestic abuse or intimate partner violence or gender-based violence, and stalkerware is just the latest example. All those examples that you just cited can be perpetrated by software, but they can also be perpetrated through other forms of either account compromise or device compromise. So for example, if you have shared passwords with a former partner that you didn't think to change, or even if you didn't share passwords, you may not necessarily have the strongest passwords, or they can just go into your account to do a password recovery and guess the answers to your security questions. And this is not to put the blame on the individual. This just speaks to the broader issue of why stalkerware is different, even though it is software with technical capabilities. For example, one of the reasons that a lot of the companies that participate in the coalition have some competency in this space is because stalkerware has a lot of functionality that's very similar to malware or spyware, things that companies are very used to detecting and mitigating against, so they were able to leverage that in the context of stalkerware. However, stalkerware is taking place in a very different, intimate situation. It is someone that knows you intimately, someone that can either coerce or compel you, or can guess your passwords or your answers to security questions. So they can circumvent those traditional identity and authentication mechanisms that we all rely on as part of traditional cybersecurity.
And that's what I think is one of the most important takeaways from this whole discussion. People can focus on the technical aspects of the software and say, "Okay, this is how you can mitigate," and that's clearly important, but what the companies in the coalition have had to learn, and are continuing to engage with, is the fact that this is someone in a traumatic situation. This is someone whose cell phone, as you said, can be their life. I would be horrified if people had access to just the things I think about on a daily basis, whether it's looking up a song, or looking at something related to my physical health and my mental health, or whatever it happens to be. I would not want that information broadly shared. It doesn't have to be anything criminal; these are just things that are very personal to me. And so when you think about it through that lens, the psychological harm that someone's experiencing, the feeling of being monitored all the time without knowing exactly how it's happening, the trauma that can impose, the fear: you don't feel safe anywhere except in your own head. You can't share anything with anyone. You don't want to text anyone. You don't want to call anyone. You don't want to look things up. You don't want to look for resources, because it feels safer to just stay self-contained. As we have also experienced to some degree through this pandemic, being that closed off, but having to do it as a matter of survival, can be so damaging. And so it's not as simple as just saying to someone, which unfortunately a lot of tech folks or cybersecurity people do, "Hey, just change your passwords," or "Hey, just get a new device," or "Hey, just factory reset your device," and lose all of your contacts and all of your photos and all of these things that you have accumulated over the course of your life with that phone or that device, and just do these things.
And you'll be fine. It's never that simple, because this is a matter of power and control, and someone using technology to exert that power and control. And that is a very different dynamic than what cybersecurity professionals tend to focus on, which is, hey, there's this third-party adversary, or there's this nation-state, and your system just happened to come up in the batch of things they were targeting, and that's how they gained access. No, this is someone that is using this technology specifically as a weapon against you. And it's a very different ball game than what we've traditionally accounted for in cybersecurity or privacy by design, where the threat is a third party or a stranger, as opposed to an intimate partner or someone that knows you through an interpersonal context.

Kathryn: You said so many great things there. And I think the biggest one that I'd like to dive a little further into is technology-facilitated abuse in general, right? I think the headlines are stalkerware, stalkerware, stalkerware, and it is dangerous, and you don't know what's on your device. But like you mentioned, it can be as simple as them being able to guess your security questions, or, in my situation, he had given himself access to my calendar without my knowledge. And I just happened to stumble into the settings months after I left the relationship. He was showing up to where I was and adding people on LinkedIn that I had meetings with, and I was like, how does this person know all of this information? It was because he had given himself access to my calendar. And so it's not just these niche, dangerous forms of technology that you have to know about or that are hidden on your device. It's also how regular tech can be used by abusers to facilitate abuse that often goes, like we said, from digital to in-person. And there are so many examples out there; I'm sure you know a few. Not just the calendar, but having access to a Twitter or Gmail account, all of these things. And I don't think people realize how much is actually saved within those, right? You think you're just Googling and no one has access to it, but someone can easily go into your history and find this information. It's definitely a larger conversation about, just as you mentioned, how the internet and technology have always been weaponized as a form of abuse.

Tara: Absolutely. And I think that's really where the challenge comes in for folks that are trying to raise this awareness around technology-facilitated abuse, because it can be anything. And I'm not saying that in order to scare people. I'm just saying, don't focus solely on stalkerware or these certain types of technology that do get a lot of the attention, because then you may run the risk of not locking down other parts of your digital footprint that could actually be the vectors for harm. And I think it just speaks to the fact, again, that abuse is all about power and control. Someone is trying to use every means they can in order to exert that power and control, and technology, because it is so pervasive, because it is so ubiquitous, and because it has become so frictionless that one log-in can get you into multiple accounts or multiple services, is a risk factor. It's just the reality. And that's not just for folks that might find themselves in an abusive situation; that's true for all of us. So I think what it really comes down to is something I was alluding to earlier: tech companies and security firms and those that are focused on protecting people and their devices need to think beyond those third-party adversaries. They need to understand this isn't just about someone that's completely unknown to the survivor or the victim; it can be somebody that knows them very intimately. It's about really accounting for these abusive personas, to understand how someone would try to abuse this technology.
Anti-abuse testing or anti-abuse threat modeling is not something that's new in cybersecurity, but it typically deals with things we're all familiar with, like spam or malicious content, or even sometimes online harassment. It doesn't tend to extend all the way to: okay, we're designing this product, and I know how I intend for it to be used, but what if someone uses it as intended and still uses it to harm someone, or still uses it to perpetuate abuse? A perfect example of this is personal item trackers.

Kathryn: Yup, I was thinking of just that. It was a headline a few months ago.

Tara: Absolutely. And obviously, there was a big announcement, because it was Apple, and anything Apple does tends to get some attention. But I mean, Tile has been around for ages, and GPS trackers or Bluetooth trackers have been around as well. And so the question becomes: yes, it is technology that was designed for a particular purpose. Hey, it helps you not lose things, which sounds very benign, and a lot of people have found it very valuable. But then, particularly after the Apple announcement, we started to hear anecdotally through a number of direct service organizations or shelters or charities that people were finding these devices in their car or in their purse after a child handoff with a co-parent that was a former abusive partner. And so it's one of these things where the technology is being used as intended, it's being used to track something, but it's being used to track a person, and to track that person without their knowledge. And so it requires us to take that step back and not just try to retrofit a solution, which is what we tend to do in tech, but ask: how could we have thought through these issues more holistically before the product was even deployed? How could we have engaged folks like the National Network to End Domestic Violence here in the US, the number of domestic violence groups that work at the state level, and the number of similar organizations around the world? How can we engage them more fully on the front end, when we're talking about design and the intent behind this product, to ensure that we're mitigating as much as we can against the potential for abuse? Now, the reality is that, just like security, this is not something you can get perfect.
You're not likely to get something that's completely abuse-proof, but you can walk through the exercise of consulting with these sorts of organizations as experts, so that you can not only mitigate as much as you can, but also empower someone that may find themselves in an abusive situation to know how to better protect themselves and their digital footprint if they choose to use this device. Because I think one of the other narratives that unfortunately comes out when you're talking about gender-based violence or technology-facilitated abuse is: well, then the person just shouldn't use technology. And it's like, how realistic is that in 2021? We use technology for everything: to go to work, to go to school, to help our children do their homework, all these various things. So the answer should not be "just abide by these 10 cybersecurity practices and you'll be fine." It should not be "just get rid of your devices." It should not be "you shouldn't be online." It should be: how do we, as companies that develop this stuff, better account for harm on the front end and do a better job of empowering people to protect themselves once the product is deployed, as opposed to putting the onus and the burden solely on someone who's finding themselves in these horrible situations?

Kathryn: And solely on the advocates, trying to put a bandaid on the bleeding wound, and things like that. So much of it is, wow, you're experiencing this abuse, what can we do in the moment, rather than companies building this into their solutions. And I know in the AirTag situation, they came back and developed a lot of safety features after the fact and re-released them to try to mitigate this risk. But it's like, if you had just talked to an advocate, a survivor, a victim before you ever released this and realized the potential dangers of this technology, you could have mitigated so much of that risk to begin with. And I think it's a larger conversation about trust and safety teams at technology companies in general. This is a brand new field; trust and safety professionals have really only been at organizations for the last five, six years, maybe. And it used to only be on the financial abuse side, right? That was the only kind of trust and safety that existed. Much like cybersecurity was only modeling for third-party threats, trust and safety was only looking at financial abuse, or ways in which the company could lose money, not how the end user could actually be impacted.

Tara: Yeah. And I think what's really important here is that, as you mention, it is an emerging field, an emerging discipline within the tech industry. And, this is going to be a bit of a call to action, but I feel like trust and safety professionals really need to continue to expand. Not only do more people need to get into this field, but they also need to expand the parameters of the work that they do. Because even though it has gone beyond financial abuse, it is still, by and large, very much focused on online harassment and content moderation-related issues, and not necessarily on: okay, the technology that's being developed is intended to do this, but what happens if someone uses it as intended and it still creates harm, or what if someone just finds a way to completely abuse the technology? The reason I think it's so important for trust and safety professionals to expand their understanding of these issues is because the advocates and the activists and the direct service organizations and those that work with survivors and victims are doing, literally, God's work, and they are so underfunded, they are so understaffed. And as much as they want to be a part of these conversations with tech companies and others about tech design and tech deployment, they just can't be in a million places at once. So to the extent that we can build up this competency within the industry, hopefully it can be accretive, if you will, to the kind of input and thought that an activist or an advocate would give, when it's coming from within the industry. I think that serves two purposes. First of all, it gives us more resources to help with this problem. But also, I think tech people like to talk to their own kind.
So if it's someone that can not only talk about the technology but can also convey the importance of thinking through abusability issues when it comes to trust and safety, beyond content moderation or online harassment, my theory is that it would be fairly well received, because it's coming from someone inside the company. I know that's not always the case, and we've seen that in recent weeks with some of the whistleblowing at some of the big platforms, but I think it's still worthwhile, because if you don't address these broader abusability issues, some of the stuff that's being done only gets at the edges. It's not going to solve these bigger issues. It'll address some of it, but it won't address all of it.

Kathryn: And a lot of this is people wanting to know, is this illegal, right? We get a lot of hits on the articles that we publish on our blog: is doxing illegal, is stalkerware illegal. And it's kind of difficult to have those conversations with victims and survivors, because the answer is, well, maybe it is illegal, depending on what jurisdiction you're in, who did it, what type of stalkerware it was, all of these different, unique, tiny little pieces of the puzzle on the legality of what they experienced. But there are little to no actual repercussions for any of this tech-enabled abuse, which is, I think, devastating to survivors and anyone listening. That's the sad reality of the situation. But how do you think we can put pressure on the media, and then on politicians, to make change here? How do we hold accountable both the people who perpetrate this abuse and the technology companies who don't think about the abuse their products could facilitate?

Tara: Well, it's not a simple answer. Nothing ever is. But I think it really comes down to the fact that there's a lot of awareness-raising that needs to be done, because perpetrating domestic abuse or engaging in gender-based violence is illegal, and technology is just the means. It's not the entirety of the harm or the crime that may be being perpetrated. So I think what's needed is to educate law enforcement and policymakers and judges and magistrates and all these different government officials to understand how the technology can be used for domestic abuse. Because something I've heard over and over again, through advocacy organizations or even directly from survivors, is that when you present this information to a police officer, they don't even believe that the technology has this capability, and they dismiss it out of hand because they think you're being overly paranoid. And this is true on a lot of issues when it comes to tech. Law doesn't tend to keep up very well, and the people responsible for enforcing the law, implementing the law, or potentially changing the law don't necessarily keep up on these issues either. So I really think it's important for people to get a fuller understanding of just how invasive this technology is. And that's not to say it's a bad thing — it's not inherently bad, but it can be used for bad. It can be used for harm. Until there's that connection in people's minds, until they really understand that technology does have these capabilities and that it's not just someone being paranoid, I think we're going to continue to run into resistance. And then, hopefully, once we've done enough awareness-raising about the capability and capacity of the technology, it comes down to, quite frankly, getting law enforcement and criminal justice system actors and others to start enforcing the laws that are already on the books.
Because we get into this conversation about whether we need specific laws for specific types of technological abuse. And if we tried to do that, first of all, technology evolves so quickly that we would never be able to keep up. And secondly, it's likely we don't really need to, because again, domestic violence is illegal — or, if it's not, we've got a bigger problem, and that's a different conversation. I think out of the 190-plus countries, the World Bank has said at least 145 have laws against domestic abuse. So if that's the case, it's more about understanding that these are facets of the abuse or the violence, not something that needs a new law — because a new law, I think, in some instances is just a stall tactic, and in other instances it alleviates people of their responsibility to enforce the laws that are already on the books. So I think we need to get people to be more savvy and knowledgeable about how technology actually works and how it can be abused, then get people to enforce the laws we have, and then maybe assess whether new laws are needed.

Kathryn: Completely, completely agree with you saying that this is all gender-based violence, and gender-based violence is illegal. So there doesn't need to be all of these semantics, essentially, of "oh, well, this is not illegal, and this is illegal, and it had to have happened in the same jurisdiction," and all of these rules. Like you said, it just perpetuates the problem. Honestly, it's not helping anyone. It's not helping victims. It's not helping survivors heal or get any semblance of accountability from perpetrators of this violence. So, completely agreed. We've talked about two sensationalized topics as ways of getting this much-needed information out there in the media, but let's switch gears to another topic, which is revenge porn, also known as image-based abuse or non-consensual image sharing. Can you talk a little bit about the history of this form of abuse and what the headlines have had to say about it?

Tara: Sure. Well, this has got to be a common theme — I've probably said it a number of times already — but non-consensual images are also not new. The ability to doctor photos has existed for quite some time. There's been recent reporting, for example — not from the non-consensual image perspective, but of historical photos being edited by certain governments in order to convey a certain narrative. And these are stories coming out of the thirties and fifties. So this is not new technology. I think what is new is that, because technology has become so ubiquitous, and because people are so interested in documenting their lives and sharing their lives — and photographs are a great way to do that — there's this real accessibility to the images that people share from their lives. And as a result, people are using technology to modify those images, and not usually for the better. So unfortunately we're seeing a lot of photos being doctored, and it's being portrayed as this victimless crime: it's like, well, this person's already on the internet, they're sharing photos, I don't know them, they don't know me, it's a digital crime — if it's even a crime — and therefore it's not harmful. But I think it's really important to keep in mind that just because something happens on the internet and isn't happening to you physically, harm is harm, as our friend Adam Dodge says. And it matters because, in these instances, most of the time a person doesn't even know that their image has been manipulated in this way. It started with celebrities — people thinking, not only is this someone famous with a strong online presence, you kind of feel like you have a sense of ownership of their image and of their personality. And a lot of times you start to see them edited into pornographic videos.
And so we saw a lot of that in probably the early two-thousands. And now, because technology has become so ubiquitous, you're starting to see broader dissemination of tools that allow you to manipulate the image of anyone, not just a celebrity. And like I said, those manipulations, those edits, are usually not for good purposes.

Kathryn: And I'm glad you brought up celebrities and that early-two-thousands timeframe, because I think when we reflect on that era of the Kim Kardashian and Paris Hilton sex tapes, right, these were seemingly — I don't want to use the word fun — engaging content that a lot of people consumed thinking that they had the right to. But at the end of the day, that was non-consensual image-based abuse. They did not release these images or videos with their consent. It was someone else — usually an intimate partner — releasing this content without their consent. And they were destroyed in the media. These women were able to make lemonade out of lemons, but so many women who experience this type of violence, this form of gender-based violence, cannot do that, right? They do not have the privilege and the power to do that. So has that narrative switched, or is it still very much derogatory towards women, victim-blaming and shaming? Or is a shift happening, where the media holds the perpetrators accountable for spreading that content, rather than the person who took those images and trusted someone with them?

Tara: I mean, I think it's still a very mixed bag. Because if you think back to how the conversation around consent in general has evolved from that early-two-thousands period until now, we still don't have — I won't say we don't have clarity, but we still don't necessarily have consensus that we're all playing by the same rules, unfortunately. So I think there's still a lot of victim-blaming. I think there's still a lot of, for lack of a better way to say it, slut-shaming. There's this societal impulse of, "well, yeah, he" — and I shouldn't say "he," though honestly it does tend to be a male — "may have been wrong for releasing the video, but she should never have taken it to begin with." It's always this "yes, but" kind of response, which is, I've got to say, very infuriating. And I think it's because, going back to what we were discussing earlier, when we talk about deepfakes we're still so focused on disinformation, electoral interference, and high-profile individuals. We're not seeing the broader picture of other people being harmed in a similar way to what the celebrities experienced in the early two-thousands and continue to experience today. And because the technology has become increasingly ubiquitous and very easy to use — you can just point and click and manipulate an image very simply — you're putting that technology in the hands of more and more people. Greater democratization sounds good in theory, but when democratization means more people can use the technology and more people are using it to harm, I don't think it's necessarily a great trade-off.
And I think it also speaks to a failure to understand what online culture for younger people tends to be, where younger women and younger girls experience a lot of pressure to provide sexualized images of themselves, either in the context of a romantic relationship or sometimes even not. There have been statistics suggesting that women will share images because it allows them to divert the conversation away from being pressured to have sex with someone: men will send images in order to solicit sex; women will send them to forestall it. So when you look at these broader psychological and societal issues playing out, even within an exchange between two people on this thing called the internet, a lot more conversation needs to be had about what those dynamics are. And I think it's also really important that people hold each other accountable, while recognizing that if someone wants to share an image with a partner, they should be able to do that. But that partner should understand that the image is only for their consumption. If they happen to break up, they should delete the image, or they should respect the privacy of the person they were previously with. But sexualized images of women are so discounted, so pervasive — and women are treated as so un-entitled to accountability and to privacy — that such an image just becomes another form of content. As a result, people don't treat it with the deference and respect that they should, and unfortunately that attitude tends to contribute to the problem.

Kathryn: And bringing teens, young girls especially, into this conversation, I think, is hyper-critical. We saw Frances Haugen talk about Instagram statistics on suicide and eating disorders, and it made me reflect upon my experiences as a young teen on the early internet, right? There were no trust and safety teams. There were no report buttons or hotlines or these types of things. And there were dozens of predators, I think, that I ran into in my early internet experiences. I'm actually just starting to explore that in a piece right now, about how dangerous the internet is and how much online grooming there is. And I'm so glad — and sad — that you brought up how women use image-sharing as a stalling tactic. It's a defense mechanism, almost, in a way. And so much of this is how we are forced to respond to threats from individuals of various ages.

Tara: No, I just think it's really important that we think through how early this socialization of young girls into sexualizing their bodies happens. I think it's really hard for a lot of people to believe that it happens so early, but I am much older than you, and I remember kids talking about sex at a very young age. They didn't understand what they were talking about, but sex enters the conversation so early — and we didn't have any of this technology; this was pre-internet. So I can only imagine, when you've got Instagram and all the social media: boys thinking, well, my guy friends are asking me if I've gotten a new picture yet, and feeling this pressure to ask and to compel — or in some ways verbally coerce — someone to provide that image. And then girls feeling that pressure both from their friends and from society: oh, well, all these other people are scantily dressed or nude. Sometimes it can even be this very interesting takeaway from the body positivity movements, where it's like, "I'm comfortable with my body" — which is great, and that's important — but you have to understand that's not the way it's being consumed or received. Not that that should be your responsibility, but the reality is that showing your skin is never going to be consumed only as body positivity. And this gets really uncomfortable, because girls have to deal with so much, and it's so hard to figure out which lines to walk: which messages that are supposed to be empowering can I listen to, and which can I not? Take the silhouette challenge, where someone says, "I'm going to do the silhouette challenge" — with a certain filter inserted, but without any clothes on underneath.
And then, of course, someone developed a technology to remove the filter, so they could see whether you were naked or wearing a bikini or whatever. So it's always: how do we think through the potential implications of the things we do online? And I think the challenge is that, again, the onus is always put on girls to be on the defensive. It's always on them; they never get to simply exist and enjoy and feel that sense of freedom online, because it always seems like there's something out there that can harm them. And I don't have an answer to that at all, unfortunately. I wish I did, because I unfortunately see it as a perpetuation of how society treats women and girls more generally, and online is just one aspect of that.

Kathryn: It's so daunting, the world that young girls have to face online. I only hope that they're listening to these types of conversations and engaging with material that can really help them, and hopefully proactively prevent some of these experiences from happening to them. And then, if they do engage in these things, right — they take the pictures, they do these challenges, and these things are weaponized against them — that they have the resources available to know what to do after the fact. So can you talk a little bit about the resources available if someone does experience non-consensual image-based abuse, or a deepfake of them is created? What can they do to help the situation, or even to help themselves heal from it?

Tara: Well, I think the biggest thing is to seek out support. It is so easy, for a variety of reasons, but particularly if you find yourself in this sort of situation, to blame yourself, because people will make you feel responsible for what happened. That is unfortunately a typical dynamic our society has around anything related to gender-based violence: you are made to feel that you are at fault. And what's really important for young women and girls to understand is that you are not at fault. The person who betrayed your trust, the person who shared those images without your consent, is the problem; you are not. I know that's obviously easier said than done, but it's such an important message that we have to reiterate at every opportunity. In terms of resources, I'd look to your local domestic violence program, which can potentially put you in touch with local resources that could be of assistance. There is also an organization called the Cyber Civil Rights Initiative that focuses almost exclusively on non-consensual images. They have a hotline, 844-878-2274, and they provide information and guidance on how to document images and work to take them down, referrals to attorneys, as well as emotional support. Because this gets into another topic where we do seem to have broader societal consensus: the sharing of images, particularly of young girls, can dovetail into the territory of child sexual abuse material. And fortunately, like I said, we have broader societal consensus that that is harmful and needs to be addressed. So you could technically be an adult, you could be over 18, but if the images were taken before you were 18, you would be considered a minor in them, and therefore there may be another aspect of enforcement that needs to be considered.
So I think it's reaching out to the Cyber Civil Rights Initiative, reaching out to a local domestic violence program, and reaching out online — because, unfortunately, this is so pervasive and ubiquitous that there are communities online of people who have experienced it and can share what they've gone through and the methods they used to try to protect themselves. This is, unfortunately, one of the things that happens with the internet: once something is on the internet, it doesn't go away. But that doesn't mean you shouldn't try, because it's so important to take that power back — to the extent that you can, to the extent that you're able to do that with support, hopefully from friends and family, but particularly from organizations that have dedicated their resources and their time to these sorts of issues.

Kathryn: Thank you so much for being with us today. Your work continues to personally inspire me, and I'm just so glad we were able to discuss how information, even when it's sensationalized, can be used to foster really important conversations.

Tara: No, thank you so much for having me, Kathryn. It has been a pleasure to be here. I'm so glad to have connected with you a little under two years ago, and I look forward to continuing to work with you going forward.

Kathryn: Of course, of course.

Reckoning is a podcast produced by Garbo, a tech non-profit building a new kind of online background check. Our executive producer is Imani Nichols with whisper and mutter. Please subscribe to the show via your favorite podcast app. And as always, please send your questions and comments to hello@garbo.io

