
tv   Campaign 2024 Discussion on Elections in the Age of AI  CSPAN  April 29, 2024 1:06pm-2:00pm EDT

1:06 pm
and wildfire detection. later this week, the lower chamber could take up bipartisan legislation that would codify a definition of anti-semitism to apply to federally funded education programs. we'll have live coverage of the house when members return here on c-span. >> c-span is your unfiltered view of government. we're funded by these television companies and more. including mediacom. >> at mediacom, we believe that whether you live here or right here or way out in the middle of anywhere, you should have access to fast, reliable internet. that's why we're leading the way and taking you to 10-g. >> mediacom supports c-span as a public service, along with these other television providers. giving you a front row seat to democracy. >> next, a discussion on artificial intelligence and the potential risks it poses for voters, campaigns and election
1:07 pm
officials during the 2024 campaign season. panelists also highlight the importance of state regulations on a.i. and local elections. this event is hosted by the bipartisan policy center. >> welcome to the bipartisan policy center and thank you for joining us for tonight's event. i'm the executive director of the democracy program here at the bipartisan policy center. we are pleased to bring you this event on artificial intelligence and the 2024 election, in partnership with the harris school of public policy at the university of chicago. we believe that working together we can forge durable, bipartisan solutions that benefit america and americans. one of our institutional goals is strengthening our democracy and its institutions, which is why we have devoted much of the
1:08 pm
past 10 years to election and voting policy. every election cycle comes with its own unique developments, and 2024 is no exception. the widespread and growing accessibility of generative artificial intelligence is already shaping campaigns and voting. in january, for example, we saw deepfake robocalls purporting to be from president biden sent to new hampshire voters with a message discouraging them from voting. public discourse and media coverage have understandably focused on the potential threats and challenges to democracy. foreign and domestic actors have much easier access to tools that can create convincing deepfakes and amplify misinformation about how and when to vote, depict candidates in compromising situations, and fuel conspiracies about voting irregularities, just to name a few. but we cannot discount the benefits of ai.
1:09 pm
technology has for decades fueled innovation in campaigns and voting. this comes at a time of heightened tensions, and it poses real challenges. election offices have been underfunded for decades, and they are under threat. i think you are here to hear about solutions, so let me turn it over to our moderator. he is a professor at the university of chicago harris school of public policy. >> welcome. thank you so much. thank you for having me.
1:10 pm
we pride ourselves on being outside of washington, so we can take the time and space to be analytical, working very deeply and trying to effect policy change, and then having this exchange back in washington, where people do the hard work. we are really happy to be doing this. let me quickly introduce the panelists before we get into our conversation. he is the director of the elections project. he has also worked on technology issues for senator elizabeth warren and has done research on related issues.
1:11 pm
she is the director of technology policy at the center for american progress. she spent the majority of her career in the technology industry and also served in the department of the treasury. he is the senior research fellow at the center for growth and opportunity. he guided efforts to enable
1:12 pm
people to vote. he has worked as an attorney. great to have you. we will jump right into talking about this. we can try to talk about some of the problems and some of the solutions. as we start doing so, it is important to note that we will be facing an important election this year. there will be many threats, and we will be grappling with a set of issues.
1:13 pm
let's start with election risks. tell me how you think about the set of risks we are facing. we can get into discussions of things like cybersecurity and the like. i want to unpack that a little bit and hear from you: what are the threats we should be thinking about? and then we will turn to some of the solutions. where do you want to get started? >> i am glad you mentioned that. we can get into first-order versus second-order. there are so many things to talk
1:14 pm
about, so many ways to order the discussion. you can think about the kind of risk brought up in the introductory remarks: deepfakes and other kinds of content produced by actors who are seeking to deceive. malicious actors. that is a whole set of risks that we could spend the entire hour discussing. there are also reasons to think that we as americans are resilient to this stuff. that is one group of people, malicious actors. the second group is average voters using ai to get information on how to vote. this is something people have concerns about. there is good reason to think that this is not a huge risk. voters are just not that interested in getting information this way.
1:15 pm
we did a report a couple of weeks ago on elections. getting information from chatbots ranks really low. i think it was last, below "i don't know where i get my information." it is something voters are not interested in, for now. but these tools are popping up everywhere. they are inescapable. they are becoming more integrated. another reason not to be concerned is that tech companies are aware these tools can hallucinate. they try to do things to redirect voters to authoritative sources. so that will not be so bad. >> thank you for having me.
1:16 pm
my take on this is, like with any technology, ai makes good things better and bad things worse. when i think about elections and the way we consume and receive and generate information, it is already pretty fragile. with the advent of generative ai, i see that problem exacerbated. i don't think ai will be the reason democracy ends or falls apart. the reality is it drops the barrier to creating and disseminating information that is problematic in nature. the dissemination of information still falls to the platforms.
1:17 pm
people are not sharing on openai the way they are on instagram or snapchat or other places. the existing problems are more pertinent than ever. they are now heightened because of generative ai. the only caveat i will say is that the thing that keeps me up at night is not anything about joe biden or donald trump. it is the down-ballot considerations. in such a massive election, with all 50 states heading to the polls at local levels, there are so many microcosms of threat that exist, where the digital literacy around spotting and reporting a deepfake will be less likely than for a deepfake of joe biden. it is really incumbent on those
1:18 pm
stakeholders, the state authorities, to build that literacy. while i am a former platform employee and i worked on things like authoritative sources, i know that is barely scratching the surface. >> here is something that is not a risk. one of the early discussions was that cybersecurity would be a big challenge, since ai can easily write code. i think there was a lot of early talk about whether this would present a cybersecurity risk to elections. in my discussions with experts, they are not that worried about it. it turns out that the u.s. has a
1:19 pm
robust, redundant election system, so distributed that it is chaotic. it is very hard to capture in a way that lets you manipulate the votes, even with a bunch of vulnerabilities. that will not be the concern it was initially. there could be other cybersecurity threats, but it doesn't seem like this is a particular risk. the biggest risk i see is that we still do not trust each other very much. i think the concerns around misinformation are where people are now focused. one of the things i worry about in this space is the idea of changing the outcome of an election by manipulating what people see at a national level.
1:20 pm
that might be another thing. if it were possible to do, the hundreds of millions of dollars that go into getting voters to change their minds would be much more effective. we don't really know how to do that well. one thing that is easier to do is to make people less certain about the validity of the outcome. we are already at a place, as you mentioned previously, where the american public is worried any time an election happens about whether or not their guy got a fair shake. a lot of the discussion around ai, while maybe not actually contributing to that, is another narrative that people can grab onto and say, this is another reason why my guy did not get a fair shake.
1:21 pm
it is really up to us to educate the public, so that they talk about this issue in a way that is balanced and addresses concerns people might have, but does not heighten the concern so that people automatically turn to ai as the reason the election did not go well if it does not go the way they want. that is where i am most worried, more than whether or not ai actually changes a person's vote. >> there is a question here. if you think about 2020, one of the positive stories for american democracy is the extent to which the professionalism of the election administration really held up.
1:22 pm
secretaries of state, county election administrators, really behaving in an extremely professional, technocratic manner, even when facing a lot of difficulty. it is really a story of a triumph of american democracy. what we are thinking about this year is, should we be worried about ai? i can give you a few scenarios. if the election does not go the way you want and there is this narrative about ai, we undermine faith in elections by blaming ai, even if that is not true. another scenario would be if you think about the famous video of a poll worker innocently moving some ballots from here to there
1:23 pm
in arizona, that became a very big story. you can imagine a deepfake version of that. there you really get the use of ai for an attack undermining faith. you can imagine other types of scenarios, like false information about where you should show up to vote. is the set of attacks on election administration broad enough that we should be talking about policy responses or education responses now? what set of responses should we be thinking about? what should be most concerning to us? >> my biggest concern is, what is generative ai bringing to this? it would take a very detailed fake photo to stir up a giant
1:24 pm
controversy. will attackers spend a bunch of money figuring out a tool to do this? or will they just take a blurry photo and spend all of their time and money distributing that rumor, rather than generating the content? i don't think we know exactly how that shakes out. it is distribution that matters. the chokepoint for misinformation is not really the generation of it. it is the distribution of it. that is where people will still spend a lot of time. we should think about what is at the margin before we decide to plunge in and do something very specific to ai, rather than something more general about deceptive attempts to intimidate voters or
1:25 pm
distract them or direct them somewhere else or threaten election officials, regardless of what technology you use. that is how i think about it. >> for all of the examples you cited, there are low-tech equivalents that would be just as effective, and that election officials have seen in the past. it does not change the picture that much, necessarily. it will require officials to double down. they have had so much more put on their plate in terms of responsibilities, including being prepared to counter some kind of false narrative. i think a lot of the solutions
1:26 pm
are things that election officials have been thinking about for a while and getting good at. it is hard to say what kind of narrative we will face this year. >> i think the problem of scale is really relevant. the platforms have been dealing with these challenges. it is an existing problem. when you have any of these photo or video creation tools, there will be more of this content out there. in a situation where you can't tackle the problem head-on with enough resources to do it fully, and you add additional tools to create and generate this, who is to say what the reality will look like? it could be awful or it could just be a drop in the bucket of an existing issue. >> i think that also gets to
1:27 pm
your point on the effect on smaller jurisdictions. when we talk about a biden deepfake, obviously that will be debunked and reported pretty widely, and hopefully that would make it to voters. at the lower level, where there is not a local newspaper that is actually able to pick up the narrative, i do agree with you. that will be something more to watch. >> especially when voters are heading to the polls that day. >> it is interesting to think about, as we start turning our attention to regulation, to what extent is this just politics as usual?
1:28 pm
there has been dirty politics since time immemorial. maybe it is more of the same. another possibility is that the scale really matters, and maybe this floods the zone more than in the past. i don't know. as we think about the set of problems, do we have a set of working regulatory or policy ideas that feel up to the moment? we have seen states attempt to do some regulation. we are having conversations about regulation. where is the regulatory conversation or the policy conversation making progress? >> another thing we need to think about is the parties involved.
1:29 pm
incumbents traditionally worry about new technology, especially technology that pushes capabilities down and helps people at a lower scale produce content at a more professional level. that can help explain some of the concern about generative ai. incumbents have been spending tons of money to create content that looks professional, and now a competitor can create sophisticated content at low cost. i think we have to keep that in mind when we are thinking about policy. regulation around speech about elections runs right into the first amendment. you always have to keep that in mind. that is why a lot of the most
1:30 pm
targeted tools in this space will be focused on conduct aimed at the electoral process, and not aimed at the discussion around the election. when you are doing things like lying to people about where the polling place is, or threatening or harassing election officials in an automated way, those are the types of things that we should have solutions for. i don't know that they need to be ai-specific. we have laws around much of that. maybe we need to strengthen those laws to address any challenges. >> i would take a big-picture approach and say there is some tech regulation happening.
1:31 pm
i am of the belief that it is a failure of this administration if the only thing they are regulating is a singular platform. there is a strong need for protecting kids online. you can argue how to go about those things, but my general thought, and that of the folks in the states, is that there is a need for regulation in the online ecosystem. we have seen a failure to regulate the platforms effectively. with all of that, notwithstanding generative ai, it is clear there is a need for more action in d.c. and the states. the states have stepped up in a lot of ways. california has laid the foundation for how companies build privacy into their systems, and with gdpr, it is baked into
1:32 pm
what they're doing now. even with generative ai, in michigan, the governor signed a law around disclosure of generative ai ahead of the election, so you can see the states starting to create approaches for what they think is a problem, some with pros and others with cons. they are fresh off regulating. but given all of that, in the moment we are in now, there is a strong need for baseline regulations, and especially for ensuring that we can harness the opportunities of ai while balancing the risks. the caveat is, as was said, there is a lot already on the books that does now apply to ai, and i am really excited that we are undertaking a body of work to see how ai applies for the department of
1:33 pm
education or hhs and other organizations?
1:34 pm
so that is a good starting place, rather than any sort of targeted platform regulation, which is what we have right now. >> it is about the harms and risks that we are worried about rather than the technology itself. we already do have so many tools and laws, like you referred to. in lots of states, lying about the manner of elections is already illegal, whether you use generative ai or a crayon. some of the things the states are doing around transparency are pretty interesting, so states like michigan have passed disclosure laws that campaigns have to figure out. >> if i were advising a candidate, it would be hard for me to say, do not put this disclaimer on your content, because i think it would be pretty hard right now to produce a video that did not use some sort of learning algorithm. i mean, the chips in cameras now essentially have embedded machine learning algorithms that sharpen a photo, so does that
1:35 pm
count as ai? i don't know if you saw that video with the iphone 15, where the bride had an arm in the wrong place, or three arms, and it was because of the generative algorithm that is built into the phone as it takes a picture. so would that count as generative ai, and would you have to put the disclaimer on it? i think the safe bet would be putting that disclaimer everywhere, which is not useful. there are practical challenges. these definitions are hard to nail down. that is why a tech-specific approach is tricky. >> if i think about the story about why incumbents may oppose the technology: if any change creates risk, you're going to
1:36 pm
win as things are, so why would you change anything? i would like to think about positive uses of generative ai in the election space. the story told about why incumbents worry about the technology is actually a good story for electoral democracy, especially for underfunded candidates, challengers who otherwise cannot generate professional-looking campaigns, who can now generate ads almost for free. if you look at the data from the last couple of elections on who is using facebook ads or political targeted ads, they are taken up by challengers. so is this a good story for ai and
1:37 pm
elections? are we headed toward a golden age of electoral competition? are there other stories about ai, not all doom and gloom, coming out of these elections for us? >> good or bad, maybe the incumbents are right? there are some technologies that could make the world more complicated. you look at social media and its story in politics: the story for the obama election was very positive, and the story about how trump used it was negative. i think what you will see is a lot more variability in the ability of candidates who do not have traditional backgrounds to get traction. i think that could be a challenging story for democracy, but it depends in part on the candidates. i trust the american people
1:38 pm
enough to think that we will figure this thing out. we have a system in place where it is not simply about maximum votes getting you across the line; it takes a lot of scale and the ability to build a coalition. i think some of that protects us from the most extreme types of variance, and ai does not really change that too much. i think there are other positive stories about the use of ai as a communications tool, but i have been talking too much. >> when i think about upsides, i think about election officials as part of the workforce, the civic workforce, that is being asked to do a lot while not receiving much funding. elections are carried out in the u.s. by 10,000 jurisdictions. in many of them, the elections are carried out by one person. one person who is maybe part-time.
1:39 pm
maybe they have one i.t. staffer, or zero, depending on the size. they have a lot to do. i think there are opportunities where ai could be used to do more with less, which is what we are always asking them to do, though what we should actually do is fund them at higher levels. there are things that election officials have to do that can be streamlined by use of ai. they also generate a lot of memos and things, just like any campaign would. they have materials that need proofing and things like that. there is a set of activities that ai could potentially help with. some election officials are experimenting with it in these kinds of areas, but it is early days. there is not official guidance
1:40 pm
from anybody on how to do this safely and responsibly. i think we will get there eventually. eventually, it will be incorporated into election official workflows, just like it will be for all sorts of workers. i was curious: raise your hand if you have tried using generative ai to do something related to your job? it is a little over half. this goes for many different offices, including election officials, so we have to figure out how to put safeguards on it. with elections, a lot of that involves making sure that no matter what you do, there is human review at the end of the process, and that is just built into the dna of election officials, a part of the process. >> i would add another positive, which is the flip side of the coin i spoke about earlier about the ease and the reduction of the hurdle to create misinformation
1:41 pm
and disinformation in video, text, and audio: civic participation becomes easier for people who would like to write a letter to their congressperson or engage on an issue and are not sure how and need a head start. those of you who raised your hands probably have used chatgpt to get started on something. you have an idea, and it gives you information and fleshes out that idea into something more. you can think about that in the civic context: i care about this issue and would like to write to my congressman or mayor but don't know how to do that. that is a clear-cut example in the democracy space that carries all the risks we spoke about, but it is a real opportunity. >> maybe you can help us think about this side. let's think
1:42 pm
a little bit about the variety of self-regulatory approaches we have seen to these issues over the last decade or so. i'm interested in this at a moment when we are seeing twitter going in the opposite direction from where everybody else has gone over the last decade. so i'm interested in your take on what kinds of self-regulatory strategies have been tried in earnest, and did they do anything? >> [laughter] great question. so, in my time at meta and twitter, and i will say that i was fired the first day that elon took over the company, a proud badge i wear every day, yeah, actually here in d.c.
1:43 pm
so i was on a team that was called product policy, which essentially advocated for all the things we're talking about here, like content moderation, safety, transparency, accountability, in rooms full of the people building this stuff, designers and engineers. i was often the only person in a room full of these people advocating for the issues that i cared about. the companies care about things like growth and engagement and time spent, stock price, the bottom line, things institutions naturally care about, and without an external push they do not really prioritize these other things. ultimately, these incentives were never prioritized by the companies because they did not have to be. to this day, without appropriate external pressure, regulation or a
1:44 pm
forcing function, i don't believe they will be. that is partly why i made the transition and left the tech world to come to civil society, to have the conversation and be straight up when i talk to people at the companies, to speak the language they speak and wear the hats that i have worn in the past in these conversations and say, you know, i know you guys care about these issues. you are working tirelessly, but at the end of the day, these incentives get deprioritized by things like growing and keeping users on the surfaces, and keeping the lights on. without that external pressure, we will just continue to see a failure to self-regulate. a lot of crises have occurred that have implicated the platforms in severe ways, that have led to loss of life,
1:45 pm
violence, moral issues. we are at an inflection point as a society with the industry, where it is really incumbent on all of us in this room, and d.c., the states, and around the world, honestly, to figure out how to wrap our arms around it. >> so you were there, and now you have made it to civil society. welcome. >> thank you. >> a lot of tech-focused organizations are always trying to make recommendations on what they think companies should do. did you find that people considered that within the companies? >> yes. there are well-intentioned teams internally that spend all day talking to organizations, and when i got to cap, one of the first things i did was write one of those reports.
1:46 pm
here is how you protect elections. this came out in august, and i wrote it in the style of how i would write internally. whether or not it had impact remains to be seen, but i do know that after meetings with organizations, there were well-intentioned people who would bring those recommendations to the people designing the systems and building the products. ultimately, though, it is what i said: it is a for-profit institution that will do what it needs to do to generate profits, and if it is not on that list, they will not prioritize those things. >> content moderation is already a difficult, difficult problem. political content moderation can get you in a lot of trouble. even if you do it perfectly 99.99% of the time, you get a few examples and people make a
1:47 pm
big deal out of it, and it gets you hauled in front of congress. i think there are strong incentives to get it right. but it is a difficult problem, and what getting it right means is different depending on where you sit politically. all the recommendations in the world do not make the problem easier. around elections, one would hope there would be bipartisan agreement on what running a good election looks like. i'm not even sure that is 100% true, but i think it's challenging. so far we have been talking about social media companies, not generative ai companies. self-regulation in the generative ai space is something different. in fact, you can think of reinforcement learning through human feedback as essentially a form of self-regulation. a lot of work is going into that space.
1:48 pm
how are they doing it for election-related materials? it is not clear. i think they are trying hard to tell people that these are imagination machines, but then they have marketing that says something different. i do think that it is a complicated problem for them to solve. again, i think the real problem is distribution, not content generation. maybe that is the right place to talk about self-regulation in this space, since so far we have not really been talking about that kind of self-regulation. >> i would like to come back to self-regulation on the first side, on the generation side, but i would like to stick with distribution for a minute. they are interwoven, and distribution is a complement. content moderation is really hard to do right. nobody agrees what is right, because policy is zero-sum and
1:49 pm
the thing that is good for one team is bad for the other. i tend to agree that content moderation becomes political, because every piece of content you take down pleases one side while making the other side mad. the other thing: if you compare today to 2016 or 2020, and here again i think twitter is really the outlier, if you look at tiktok, instagram, whatever, there has been a mass turning-down of the volume. we are just seeing less politics on the platforms than we did half a decade or a decade ago. i'm interested in your thoughts. is it good, bad? it is a kind of self-regulation
1:50 pm
that has taken them out of the politics. >> i will add that most platforms have said they will not directly run political ads, and it is cutting into their pocketbooks. but that is not a good solution, for the same reasons you said: these targeted ads are the way startups, people challenging incumbents, can get traction and build a campaign. if you remove that factor, i think that's bad for democracy. you can see the appeal of it. rather than make hard decisions anymore, they just say, we don't do that here anymore. it is a form of self-regulation, but i think a pretty bad one. >> from 2017 to 2020, the
1:51 pm
pendulum had swung to the post-2016 side of things, and now twitter is the outlier. they did not do political ads last time and they are doing political ads this time. the pendulum has now swung, with the texas and florida cases and murthy v. missouri, and the relationship between technology, the government, and the public is at an inflection point. the problem of scale is so severe, and the realization has set in that effectively moderating political speech and advertising, forget keeping everybody happy because that is zero-sum, but even keeping the space healthy, is completely impossible. so the solution is to demote political content with machine learning models trained on every social issue that exists. it is quite imperfect when you
1:52 pm
think about the scale at which they are operating. even though that is the approach they have taken, you will still see virality across certain content that will still become an issue for the companies. it is one approach, not necessarily the one i agree with for discourse and living in a healthy democracy, but that is where they have turned. >> i'm looking back at my previous testimony. as of the date of this, i think it is still true: if you do have synthetic content, you disclose it. i don't know how they would police that effectively, but if anybody can, they could. that is a different approach,
1:53 pm
and their definitions are much tighter than some legislative language around what is ai. >> so viewers are made aware that it was done with ai. >> exactly. >> speaking of self-regulation on the ai side for ads, and some of these are social media companies: openai has taken a stand that you cannot use their tools for political content, other companies have been less strict about that, and there are open-source models you can put on your laptop where you don't have to follow their rules at all. is the content-generation-side self-regulation doing anything? or is the genie out of the bottle and there is nothing that can be done at this point? >> i do think that the most
1:54 pm
effective self-regulation is the training of the model and the layers they put above it to check. it does not work so well if you are downloading a model to your laptop, but if you are interacting with chatgpt or something like that, there are layers of checks to see if the result coming out is something they might be squeamish about. that doesn't always go right either, because sometimes that blows up into a pr problem, but that is the approach. there have also been agreements, collaboration might not be the right word, some driven by the white house, some driven by the u.k., around uses and procedures that they have adopted to ward off the worst uses.
1:55 pm
i think the practical effects of those, it's too early to know and too hard to evaluate. i have very little fear of somebody using something like chatgpt to crank out tons and tons of spam, lies, and emails, because the companies can see everything that you are asking them, and they are watching. i worry less about that, but people are going to do some crazy things with these tools, and i still think the problem is on the distribution side. >> the platforms and the non-platform ai companies a few months ago did sign the munich accord, which was an effort as a group to self-regulate and try their best to prevent deceptive content from being generated with their
1:56 pm
tools. i think that is a good first step, and it remains to be seen how effective it is going to be. i would like to see them partner more with civil society, to give a little more transparency into what is going on on the platforms, what people are generating on generative ai tools. this is something civil society groups have asked for for a long time: asking the companies to be more transparent about the conversations that go on on their platforms, and when they try to make interventions to make things less toxic, whether the interventions are working. letting everybody else in to see how it is actually working, so more transparency, would be great to see. >> i think where things like the munich accord, or what the ai
1:57 pm
generative companies signed as well, fell really short: neil is correct about chatgpt. there is keyword detection and refusals, responses that say, sorry, we will not tell you how to vote, to keep you safe. but there is a huge component of their bottom line that is api access, and this goes for all of those models. that is really where the companies, the developers, are making pretty big asks of third parties that are integrating the api. the example is, in order to integrate gpt-4 into your third-party organization, under the terms of service of openai, you have to disclose that you are
1:58 pm
using that technology, and virtually no one does, because why would they? there is no regulatory hand forcing them, and there is no openai check on whether they are doing it. there is a tiny terms of service. that is an accountability issue. you can think of that in all the technologies and spaces where this is being deployed. you have some training of the model, yes, but it is very easy to get around, and a lot of the trust stuff they build on top is optional. so as we think about the growth and development, and the fact that microsoft has already put this in our pockets and in search, millions and millions of people will have that uptake, like social media has had historically. it is incumbent on all
1:59 pm
developers to have the appropriate guardrails for their first and third parties. >> i will open it up to questions in a second. one more question. we have talked to some extent about civil society, regulators, government. what is the role of journalists in the coming election? i have a pet theory that at some point, the information environment becomes bad enough that people have to turn back to something like authoritative sources, professionals who are willing to sign their names to things, because they know they cannot believe anything they see without someone telling them it is trustworthy, even if it were [captioning performed by the national captioning institute, which is responsible for its caption content and accuracy. visit ncicap.org] [captions copyright national cable satellite corp. 2024] >> we will break away at this point for live coverage of the u.s. house on this monday. lawmakers are considering a handful of bills today, including one that doubles the u.s. customs area of operation in coastal waters, to counter human trafficking and illicit drugs. another measure on the floor would protect children from onlineex
