Discussion on AI and the 2024 Election (C-SPAN, April 16, 2024, 5:36am-6:26am EDT)

[applause] >> Thank you. So you heard a little bit earlier this afternoon from Sona talking about the state of local news, and later from Brian talking about the state of national news. Right now I'm going to talk for a few minutes about the state of digital media. The story of digital media follows the trajectory of a tale as old as time: it's a story of hope followed by disillusionment, and in theory the last stage is redemption. I'm not sure we are at redemption yet.

Let me go back to hope. You may recall -- or maybe some of you, if you are students, are too young -- that back in the mid-to-late aughts and the early part of the 2010s, the world was a pretty exciting place when it came to digital media. Google really did bring the world's information to our fingertips. We connected with old friends from high school or other parts of our lives on Facebook. We had access to so many incredible videos on YouTube. When it came to news, it was revolutionary. You may remember the Arab Spring in 2010 and 2011, the Miracle on the Hudson. We had access to information because people on the ground with these mobile devices, these powerful computers in our hands, were able to report -- just regular people -- on what they were seeing and experiencing. It was the promise of citizen journalism. We learned, as it turns out, about the capture of Osama bin Laden from a guy a quarter mile away saying, I'm hearing choppers overhead, this is strange, I'm in -- I forget the town -- what's going on? It was the democratization of information. Or so we thought.
Fast forward and we get to the disillusionment, where everything began to change, slowly and then all at once. What is it that went wrong, exactly? I would put it in three categories. People -- people is what went wrong. You had foreign actors who gamed the system: the Internet Research Agency in St. Petersburg was able to use these social media platforms to manipulate public opinion in the United States in the run-up to the 2016 elections. We had domestic folks who were targets, who then willfully spread those and other kinds of mis- and disinformation -- whether it was about elections, whether it was about covid, whether it was about the reasons the crash happened at the bridge in Baltimore last week. Then you have the profiteers, with no ideology other than making a few quick bucks, who were able to game these platforms to bring money to themselves through various means. That was one category.

The second category was the tech companies and platforms themselves. Whether they did not anticipate what could happen with these platforms, or whether they knew their platforms could be gamed and didn't care, we do not know. We may never know. But we started to see that social media platforms became basically a big game of whack-a-mole, starting in 2014, 2015 and continuing to this day. Then after 2020, a lot of the social media platforms threw up their hands and said, enough. We are not going to try to moderate content on our platforms. It's too expensive. It's too politically fraught. Too many people hate us no matter what decision we make. Too many subpoenas coming from Congress. They said, we are out; we are going to leave it alone. And you have Elon Musk, who bought Twitter -- never perfect, but always an incredibly valuable tool -- turning it into the dumpster fire it is today.

The third category of what went wrong is the decline of local news, which we have been talking about so much today; I don't need to repeat it. Into that void has poured so much of this kind of noise that we hear from the platforms. News organizations sometimes did it to themselves. Those of us in news all laugh at the line about the pivot to video, which was supposed to save journalism. It didn't, because as soon as Facebook changed its strategy, the whole thing went down in flames. The money that supported journalism went to the platforms -- a lot of it for good reasons: for advertisers, it was a more efficient way to reach people. And the world has also become such a polarized place that many people just do not trust any news. Again, we have been talking about that all afternoon.

And now here we are -- this is going to be a bit of a segue into the panel that I'm going to be part of next -- in the A.I. world. What is that going to do to this already very fraught digital information ecosystem? My message to you, and I may repeat this on the panel given the right opportunity: don't be afraid. There's been a lot of coverage about the big spectacular deepfake of Trump doing something or Biden doing something. That may happen. I don't think that's the big worry; those will be debunked so quickly that they are not going to get a chance to gain much traction. What I'm much more worried about, when it comes to content manipulated via artificial intelligence, is the things you can't see -- that the media can't see, that the public can't see and debunk. It's coming in on WhatsApp.
It's coming in on Facebook Messenger, on Telegram, on all of the peer-to-peer apps -- not always strictly peer-to-peer, since you can distribute broadly on those. It's the content you can't see that can be very damaging, and very targeted, because what A.I. enables is a speed and scale and degree of targeting never before possible. If before you needed the Internet Research Agency, funded by the Kremlin in St. Petersburg, to pull this off, now you don't. We are back to the proverbial guy in his pajamas at his parents' house, who can make the same effort at the same scale and cause a lot of trouble.

But if there is one thing that worries me most, it is a phrase coined by two people -- a UT researcher named Bobby Chesney and another academic, Danielle Citron -- and it is the liar's dividend. The liar's dividend describes what happens when we are hearing from so many different places that we can't trust information: that A.I. can manipulate video, which it can; audio, which it can; images, which it can. Instead of trying to find out whether something is real or not real, going to trusted sources, what do we do? We stop believing anything at all. That is the liar's dividend. It is out of the playbook of autocrats and would-be autocrats going back millennia, now enabled by A.I. For those of you who were here last night and heard Woodward and Bernstein: Carl Bernstein made a comment that public opinion changed when people were finally able to hear Richard Nixon's tapes. Think about what would happen today if those tapes were released. Fake news. This is A.I.-manipulated audio. That wasn't me. You know what? A lot of people would say, yeah, I don't know that I can believe that. That's the world we live in.

Yes, I was supposed to talk about redemption. I don't know that I have the redemption yet. If there is redemption to be had, it is in the promise and the growth -- which so many people here in this session heard about in the first panel -- of local news. There is no silver bullet for all these problems, but if there is any salvation to be had, it is in local news and the growth of local news: people of the community, in the community, communing with the people who are their neighbors, providing them the information they need, listening to them, and building that trust. We know that leads to civic engagement. We know how important it is. We must all support those efforts, whether it is Press Forward or any of the other efforts you heard about from Sarah of the American Journalism Project, or the work that Elizabeth is doing. We all really need to support that. It is the only real way out. And with that, we are going to move into our A.I. and elections panel. I am happy to introduce my fellow panelists: Secretary of State Cisco Aguilar, Secretary of State Scott Schwab, OpenAI's Becky Waite, and our moderator, Dr. Talia Stroud. [applause]
>> Thank you so much for that. Such a pleasure to chat about digital media and the 2024 election. We have a hot topic here -- just a small topic. Just to offer some introductory remarks: we are in a remarkable setting in 2024. We have elections in over 60 countries, plus the European Union, representing just under half of the world's population, which is mind-blowing. And we have already seen A.I. used in elections. We had the A.I.-generated robocall impersonating President Biden that sought to discourage voters in New Hampshire's primary. We have audio clips of a liberal party leader discussing vote rigging. We have a video of an opposition leader in the conservative, Muslim-majority nation of Bangladesh wearing a bikini. There are so many things to talk about, and so many possible uses of A.I., as we look to the election. I'm looking forward to this conversation, and I want to get started. We are delighted to be joined by Becky, who is the head of global response at OpenAI. Becky, I want to dive right in. OpenAI released details about its approach to the 2024 elections earlier this year, and it noted that its rule is that people cannot use OpenAI tools for campaigning or for influencing voters. Can you talk to me about enforcing rules like that on a global scale? It seems almost unfathomable. Tell us about it.

Becky: Yes. Thank you so much for having me, and to the LBJ School for hosting this forum and discussion in this milestone election year. It is more important than ever to have these sorts of conversations that bring together folks across governments, civil society, and industry. I spent a lot of the last six months speaking with policymakers and civil society around the globe to understand what is top of mind for them going into this election year. While we are very excited about the significant benefits of this technology, we are also clear-eyed about its potential risks. Through those discussions, through that dialogue, we have developed a preparedness framework that focuses on three efforts: first, making sure we have the right policies in place; second, preventing abuse; and third, elevating appropriate information and transparency in our tools. First, as you mentioned, our policy lines: we don't allow our tools to be used for political campaigning or for discouraging participation. We wanted to have a set of policies that were a little more conservative this go-round, given that we haven't seen generative A.I. in the elections space before. We wanted to make sure we were taking a conservative approach out of an abundance of caution. Second is preventing abuse, and this gets to the enforcement piece. We think about safety through the entire lifecycle of our tools; it's not a single point. If you'll excuse the bad metaphor: if you are going fishing and have a bunch of nets, you don't use just one -- you use several nets so you catch as many fish as possible. We think about safety the same way. We have interventions across the entire lifecycle of our tools to make sure we are enforcing against different harms. One example of an intervention is something called reinforcement learning with human feedback. I don't know how many people here have used any of our tools or generative A.I. in general, but one thing we have in our chatbot tool, our large language model, is this reinforcement learning, which is at the very front end of how we think about safety. It's a fancy way of saying we ask a question of the model, we generate a bunch of responses, and we tell the model which one is best. By doing that we can steer the model, over and over, toward something that is safer, more reliable, and more helpful. That's how we reduce the likelihood that it will produce harmful responses. Finally, the last piece is around transparency. In our models we look to elevate appropriate sources and cite where information is coming from, where appropriate. A final thought before handing it over to someone else on the panel: it is ever more important that we have these sorts of conversations and that we are collaborating not just across industries but also within industry. We are really excited about the work we are doing with social media companies and others where generated information might actually be distributed, making sure we have those close connections going into the election season.
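Becky's description of reinforcement learning with human feedback is, at its core, a preference-learning loop: sample several candidate responses, have a human mark the best one, and nudge a reward signal toward the preferred answer. Here is a minimal sketch of that loop, assuming toy feature vectors, a simulated annotator, and a Bradley-Terry-style pairwise reward model; it illustrates the general technique, not OpenAI's implementation.

```python
# Toy sketch of the preference-learning step in RLHF: sample several
# candidate answers, have a "human" label which one is best, and nudge
# a reward model toward preferring it. All features and data are
# stand-ins for exposition; this is NOT OpenAI's implementation.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8                       # toy feature size for a "response"
w = np.zeros(DIM)             # reward model weights

def reward(x):
    """Scalar score for how good a response is under the reward model."""
    return w @ x

def update_from_preference(better, worse, lr=0.1):
    """One gradient step on the pairwise logistic (Bradley-Terry) loss,
    raising the reward of the human-preferred response."""
    global w
    p = 1.0 / (1.0 + np.exp(-(reward(better) - reward(worse))))
    w += lr * (1.0 - p) * (better - worse)

hidden = np.ones(DIM) / np.sqrt(DIM)   # the annotators' hidden notion
                                       # of "safe and helpful"
for _ in range(500):                   # simulated annotation loop
    candidates = rng.normal(size=(4, DIM))   # 4 sampled "responses"
    best_i = int(np.argmax(candidates @ hidden))  # human picks the best
    for j in range(4):
        if j != best_i:
            update_from_preference(candidates[best_i], candidates[j])

# After training, the reward model tracks the annotators' preference
# direction, so it can steer generation toward preferred responses.
print("reward model direction:", np.round(w / np.linalg.norm(w), 2))
```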
Talia: Speaking of close connections, OpenAI has a partnership with the National Association of Secretaries of State, and what an honor for us to have the president of that organization with us, Secretary Scott Schwab. Thank you so much for being here. Let's get started by thinking about Kansas. When you think about Kansas, what A.I. or foreign influence threats worry you the most?

Sec. Schwab: Here is where we come from in looking at this: there is a difference between the campaign side and the election side, and people often commingle them. As a secretary, if you get that fake Biden phone call, I'm not concerned about that, because that's the campaign side. We'll let our ethics commission deal with that, our bureau of investigation and whatnot. But the election side -- this is where it can get to be a real concern. I hate to use these examples, because when you use the examples you give people ideas. Johnson County is a wealthy county. I have the honor to live there. I love it, but it's a purple county. So imagine if somebody generated a video, shared on social media, that said: due to bomb threats, all Johnson County polling places will be closed on election day. Imagine the chaos if someone used my image and likeness. We already know that news moves quickly -- we've got to get this out there. Now I'm out there saying, that's not me, and now the news is trying to figure out, what's real?
And then, when it's all sorted out, there was no bomb threat; it was fake. How many voters did you affect who said, I'm not going to take a chance, I'm not going to vote today? You can't undo that. Those are the concerns. I really like what Minnesota is doing as it relates to A.I.: if you generate an A.I. image or video or voice, it doesn't carry a disclaimer saying that it's A.I., and you are using it to influence a campaign or election, it's a crime. They carved out satire -- the "Saturday Night Live" exemption. Those are the things on the election side that become terrifying. You are not misrepresenting a candidate; a candidate can undo that. There are two great motivators for humans making decisions, hope and fear, and fear is stronger. It's a lot easier. So if you cause voters to have fear and not vote, how does that truly influence an election? That's outside the campaign side. As secretaries, that is the conversation we are having.

Talia: A disturbing scenario.

Sec. Schwab: Now somebody on the internet is saying, I have an idea. Those are the concerns.

Talia: And we have a student in the audience thinking, I have an idea how to deter that.

Sec. Schwab: Go to sos.ks.gov and help us.

Talia: You authored an article in Foreign Affairs earlier this year where you said generative A.I. companies in particular can help by developing and making available tools for identifying A.I.-generated content, and by ensuring that their capabilities are designed, developed, and deployed with security as the top priority, to prevent them from being misused by nefarious actors. How well do you think generative A.I. companies are doing?

Sec. Schwab: The example she just gave: creating safety nets. You don't know until -- when we hear the phrase in the article, safety has to be your top priority: when Boeing has a door come off an airplane, what's the first thing they say? Safety is our top priority. I get it; this still happened. A lot of times you don't know what the holes are until after it happens. So we don't know how they are doing. We can come through November of this year and -- I'm curious about your opinion; you are more of a swing state than Kansas. There's a pretty good idea of which way Kansas will go. But there are concerns we won't know about until after the election. My bigger fear is not this presidential election. It's the one in four years, because it will be more developed.
Nobody knows how to truly weaponize A.I. right now, but in four years I'm pretty sure they will. In 2020 our biggest concern was misinformation, and then we got hit with the pandemic. The great philosopher Yogi Berra once said the problem with trying to predict the future is that it keeps changing. That's what happened in 2020. In 2020 we didn't deal with A.I.; now it's like a freight train. We'll only know after 2024 how creative people have become in deploying it.

Talia: Really interesting, and some conversations we had before also touched on this: the things that will develop between now and the election, we need to look at that. Secretary Aguilar, I want to bring you into this conversation as well. In January, you said that addressing A.I. threats to electoral integrity will be a partnership between the federal government, the private sector, and local governments. I'm hoping you can give us a progress report on how much progress has been made in helping state and local governments understand the threats and how to approach them.

Sec. Aguilar: No progress. And really, it's really, really frustrating, especially when a high-level federal official arrives in your state and asks you, what are you doing about A.I.? You look at them and go: you want my state to step up and put resources behind something that is receiving billions of dollars of investment? You are the federal government. You have access to researchers. You have access to information I can only dream about. I'm trying to figure out how to get 17 counties across our battleground state off the legacy systems that exist and onto a statewide system, and you are asking me to be the leader on A.I.? It's unfortunate and unfair.

Talia: Not a good progress report there. Hopefully --
Sec. Aguilar: This is a thing that is impacting the rest of our country. It's not just an issue in Nevada. For me to have somebody ask me how I'm approaching A.I. -- I think it's unfortunate, especially when the federal government has not even had a hearing on funding. Everything in the election space from the federal side is reactive. There is no strategic plan. There is no sustainable, strategic funding for elections. Even though elections are deemed critical infrastructure, nobody's saying, what is it we are going to do? Everything we are doing is reactive. Sorry to be the downer. I can tell you a lot of great things we are doing in Nevada when it comes to elections and voter engagement, but when it comes to this issue, this is one that's catching all of us.

Talia: Everything is evolving so quickly that we are all thinking about what could be -- we don't know yet.

Sec. Aguilar: We'll catch the issues as they come. I had an opportunity to participate in the A.I. Democracy Projects, and they brought a few of us election leaders into a room. We tested several of the chatbots, and the information coming out about Nevada specifically was wrong. When you talk about these issues -- we don't know when somebody is getting this bad information. We know it's a younger voter who is going to go to A.I. and use it to ask questions and become educated. If somebody turning 18 asks, how do I register to vote, and the chatbot tells them they have to register three weeks before the election -- which is not true in Nevada; in Nevada we have same-day voter registration -- that young person is going to walk away and continue with their day. We just lost a voter. And that's what scares me.
I will have no idea that it happened, because the information being given is wrong and false.

Talia: And that is the case in the United States, where there is lots of available information that models can be trained upon. Then we think about the global scale. Vivian, I know at Aspen you have been doing work thinking about this and hosting a lot of public conversations and discussions about A.I. and elections. How do the threat, and the strategies to counter possible threats, change when we think about this globally?

Vivian: As you said in the opening, this is a record year for national elections -- you mentioned some examples in your open, too. As we count down to people voting in the fall, we have been able to track what has been happening in other national elections around the world. And can we point to anything that says, this changed the vote? We don't know. I think Secretary Schwab had it right: we are not really going to know the impact of anything in 2024 until we are able to study it afterwards -- if we can get access to the data, which is another issue and another panel we should have. But we have seen A.I. used in every single national election, along the lines of some of the examples you gave: the vote rigging, or the leading candidate in a very Muslim country with a fabricated picture of her in a bikini. It's terrifying. And the messaging -- again, it's the stuff you can't see. It's the WhatsApp messages, the Telegram messages. That's where this misinformation, these false images, this false audio can travel. The ability to generate this kind of content -- highly targeted, customized, personalized to your district, at scale -- is unprecedented.

Talia: We are going to continue on this frightening theme for a second. Becky, I want to ask you something. You live in the world of A.I. and of thinking about these challenges, and you do it at a global level. It would be really fascinating for all of us to hear, from your vantage point, what the worst-case scenario is in the upcoming election, and what can be done to circumvent it.

Becky: As you mentioned in your opening remarks, the harm that we have seen to date is really around deepfakes, which we've actually seen play out at a global scale in these elections.
We've heard over and over again in our conversations that this is the thing people are most concerned about. I think the technology today is not yet at a place where some of the larger-scale risks and concerns could take place, but there are audio-visual models available -- audio, images, and video -- that could be leveraged, and out-of-context information or misinformation could result in some really scary outcomes. As for what we're doing: we only have an image model currently available; we don't have a commercially available audio or video product. But for images, we have a mitigation in place at the front end where you can't create an image of a real person. So if you ask the model to create an image of an elected official, a secretary of state, an election official, it won't create that; it will refuse. That said, we know that images that are seemingly innocuous can be taken out of context, and that can be equally harmful. So one of the things you mentioned in your article earlier is really around provenance: understanding the origin of these audio-visual models and their output is a huge part of the work we're doing internally. To give a little more context on what that looks like, there's something called C2PA. It's a fancy term for a piece of data that's attached to an image. You can think of it like a passport: as you go around to different countries, you bring your passport. Similarly, as this image travels around the internet, it has this piece of data attached to it, and with that data someone on the other end can identify where it originated -- whether it was produced by DALL-E 3, our image generation tool, whether it was produced by a different model, or whether it's an authentic image that perhaps came from the BBC, which is another entity that has signed on to C2PA. This is by no means a perfect solution; if the image is modified, it can lose that data. But it's a really good step toward creating an industry-wide standard where all of these different platforms can talk to each other. Our tools are not a distribution platform -- people don't distribute content from our services -- but they can take content to a distribution service and upload it to any one of the social media platforms. So making sure we have a common language we can use is really important.
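To make the passport analogy concrete, here is a toy sketch of the C2PA idea: a signed manifest naming the generator travels with the image, and a verifier can both check the signature and detect that a modified image no longer matches. The HMAC-over-JSON signing below is a simplified stand-in; real C2PA credentials use certificate-based signatures and a standardized manifest format embedded in the file itself.

```python
# Toy illustration of the "passport" idea behind C2PA: a signed
# manifest travels with the image and records where it came from.
# Real C2PA uses X.509 certificate signatures and a binary manifest
# embedded in the file, not this simplified HMAC-over-JSON stand-in.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-a-real-credential"   # placeholder secret

def attach_manifest(image_bytes, generator):
    """Build a provenance manifest binding the generator to the image."""
    manifest = {
        "claim_generator": generator,   # e.g. the image model that made it
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify_manifest(image_bytes, manifest):
    """Check both the signature and that the image bytes still match."""
    manifest = dict(manifest)           # work on a copy
    sig = manifest.pop("signature")
    payload = json.dumps(manifest, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        sig, hmac.new(SIGNING_KEY, payload, "sha256").hexdigest())
    ok_hash = (manifest["content_sha256"]
               == hashlib.sha256(image_bytes).hexdigest())
    return ok_sig and ok_hash

image = b"\x89PNG...fake image bytes..."
m = attach_manifest(image, "example-image-model")
print(verify_manifest(image, m))             # True: provenance intact
print(verify_manifest(image + b"crop", m))   # False: bytes were modified
```

The failing second check mirrors Becky's caveat: once the image is edited, the attached data no longer vouches for it, which is why provenance alone is not a silver bullet.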
The last thing I'll say on that: I think it's really important that we continue as an industry to push forward the research around provenance because, as I said, the current technology is by no means a silver bullet. One of the things I'm very excited about that we're working on internally is something called a classifier, which effectively allows us to look at a whole bunch of images and, with very high accuracy, identify which ones came from our tools. What excites me about it is that it does this even when the image has been modified in the ways you see out in the wild on the internet: on social media, things get cropped, text gets laid over them, and it continues to identify those images with high degrees of accuracy. That's the kind of capability we need in order to be much more robust against these kinds of harms. So that's top of mind for 2024.
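Becky's point that the classifier keeps working after crops and overlays suggests training against modified copies of images. The sketch below is a toy reconstruction of that general idea only; OpenAI's actual classifier is internal and unpublished in detail, and the "images", the generator fingerprint, and the augmentations here are all synthetic stand-ins.

```python
# Toy sketch of the detection-classifier idea: train on images that
# have been "cropped" and "overlaid" the way they would be in the wild,
# so the detector stays accurate after those modifications.
import numpy as np

rng = np.random.default_rng(1)
N, DIM = 2000, 32

def make_image(generated):
    x = rng.normal(size=DIM)
    if generated:                           # pretend the generator leaves
        x += 0.6 * np.sin(np.arange(DIM))   # a faint statistical trace
    return x

def augment(x):
    """Crude stand-ins for cropping and text overlays."""
    x = x.copy()
    x[: rng.integers(1, 6)] = 0.0             # "crop" leading features
    x[rng.integers(0, DIM)] += 2 * rng.normal()  # "overlay" noise
    return x

# Training set: half generated, half authentic, all modified.
X = np.array([augment(make_image(i % 2 == 0)) for i in range(N)])
y = (np.arange(N) % 2 == 0).astype(float)

w, b = np.zeros(DIM), 0.0                   # plain logistic regression
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / N
    b -= 0.1 * float(np.mean(p - y))

# Evaluate on freshly modified images: the learned fingerprint survives.
test = np.array([augment(make_image(i % 2 == 0)) for i in range(400)])
pred = (1.0 / (1.0 + np.exp(-(test @ w + b)))) > 0.5
print("accuracy on modified images:",
      (pred == (np.arange(400) % 2 == 0)).mean())
```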
Talia: I think that's great, and making those sorts of tools public, in the way you called for in your article, Secretary Schwab, is a really exciting way to think about how this could help -- how things could be more productive moving forward. I want to return to something. Vivian, you started to mention that encrypted messaging apps like WhatsApp are a particular issue, and Secretary Aguilar, you've mentioned that multiple languages are a particular issue and concern, especially in a place like Nevada and here in Texas. Can you tell us a little more about what you're thinking in terms of A.I. and foreign influence, why multiple languages matter, and maybe elaborate a bit more on Vivian's comments about WhatsApp and other encrypted messaging apps?

Sec. Aguilar: We have a large Latino population in Nevada. They are going to determine the outcome of some of our most critical elections. And it goes back even to A.I. and the translation of information. Again, at the A.I. Democracy Projects we went through some of the translation exercises, and when information was translated into Spanish, the tone was very festive and party-like. When you're talking about the seriousness of elections, that's not going to translate very well to that voter, so that's a big concern. We also did it in Hindi, and the translation was so strict and so severe that I think a voter hearing it would be afraid to actually vote, because it was so strict and so direct. So the tone of translation is very, very critical. But the majority of people don't speak second languages, so they're not able to understand the impact translation can have.

Talia: I think that's one thing to think about: not only the scale, in terms of how quickly people can create messages using A.I., but also how far those messages can spread, given the distribution channels -- and then adding language on top of it.

Sec. Aguilar: The data being relied upon in these A.I. machines is not generated by these communities either, so there's a sense of already-existing bias.
How do you ensure that you're actually being content-appropriate in those translations and in the information being given?

Talia: Yeah. What a good point. In thinking about how we deal with all of this, Vivian, I want to come back to you, because given your background in news, from The New York Times and The Guardian, I'm hoping you can tell us what you think: how prepared are the news media to deal with this? What role should they have?

Vivian: They could be more prepared. At the Aspen Institute, our objective is to share information across groups, because I think that is the biggest gap. We presented at the National Association of Secretaries of State about what the risks are and what the mitigations are. We have a meeting coming up in two weeks for technologists in Silicon Valley -- in fact, the secretaries are going to be there, which I'm very grateful for -- to help the tech companies, those who are not as informed as Becky, understand what the risks are and what challenges those on the ground are facing, so they can also come up with mitigations. And the third part is to make sure the media are ready to cover these issues when they inevitably come up. I come from news media; I am in my heart a journalist. I will always consider my primary identity to be journalist. That said, my fellow journalists don't always necessarily do the right things. I think there is a bit of overblown coverage of the big, spectacular deepfake, which can lead people to mistrust everything -- as I mentioned on the podium earlier. At the same time, they need to be prepared to cover not just the big, shiny, spectacular deepfake, but the kinds of things Secretary Aguilar was talking about in terms of language translation -- whether it's fair-minded, good-hearted get-out-the-vote efforts, or nefarious actors trying to dissuade voters using easily accessible language tools, or the stories that may be traveling across messaging apps -- and really making sure they are prepared. Really, the only thing we can do for 2024 is make sure the public understands that whatever they hear, whatever they see, whatever strange robocall they get about a bomb scare at all of the voting places, they know where to go to check it out. So, what's your website again?

Sec. Schwab: sos.ks.gov.
And it's also important: if it doesn't say .gov, chances are it didn't come from us. We have significant cyber protections by using the dot-gov. There are a lot of imitators that are dot-com.

Vivian: People should go to those in charge of election integrity -- their local election leaders in their communities -- or to trusted news media.

Talia: Making sure people have those relationships, so they will go to trusted media outlets.

Sec. Aguilar: On the trusted media point: I had an opportunity to speak with teachers today. What bothers me now about trusted media is that the majority of Americans and students don't have access to that media, because it's behind a paywall. That paywall is a huge barrier to people having the opportunity to get good journalism. When it comes to elections, I wish the paywall would be removed from election information, because it's in the public interest to ensure people have strong information available to them.

Vivian: I will say, a lot of the nonprofit local media, some of whom were represented by organizations here today, have opened up the paywall.

Sec. Aguilar: That's great, but we need to ensure they have the resources to exist and to do the journalism people need.

Talia: That brings us to the next question, which I'd like to ask of all of you: if you had the ability to enact one policy with respect to A.I. and elections -- a realistic policy, so this is not the magic wand scenario -- I'm curious to hear what it would be, from each of your respective perspectives. Who wants to start?
Sec. Schwab: I spent 19 years in the legislature and I love making policy. Being a chairman was one of the greatest honors I ever had. When I was chair of financial institutions, the biggest issue was Uber, because the question was, who is responsible for the insurance on the vehicle, right? If you remember, back in 2016 -- maybe it was 2015 -- Uber canceled their network in Kansas, and if you opened Uber it said, please respond to Chairman Schwab about getting Uber back in Kansas. So you clicked on it, and imagine their network: it shut down the server in our capitol because it got all those emails. So I took down Uber. But we struck a deal and whatnot. So these are the things: I believe in a free market, so there's a freedom there, right? But it does run through critical infrastructure that is subject to regulation. If you're engaged in commerce in the United States, you're subject to regulation, which is fair. I really am spending more time on that Minnesota law. It says: I'm not saying you can't create it, but you have to be honest about what it is. And if you're not, then we're going to throw the book at you, and it's going to hurt you financially. Whether you're just a college kid or a foreign adversary, there's still going to be a cost, because maybe the federal government won't go after you, but Minnesota has a national guard; they can still sue and have jurisprudence across oceans, right? It's more of a challenge, but at least you've set out, hey, this is the standard of what we're going to do. Outside of that, how can we make laws better? Well, you're setting a great standard. Do we put that in statute?
I don't know. I'm going to hand that part off to you.

Becky: That's a great pivot. From our lens, it really is about standardization. Thinking about provenance again, that's just one example where some amount of standardization has occurred organically, across not just the tech industry but also the news media. But for that to apply to other areas of this technology, there needs to be momentum across not just industry but also government. One thing you mentioned earlier is, how do the models respond consistently? One way we can do that, and one we're exploring, is identifying a whole host of representative views so we understand what model behavior should look like. But I don't think 1,000 people in Silicon Valley should be responsible for determining what that looks like. We're trying to make good strides toward that absent standards, but it's something we need to pull together a whole bunch of minds across multiple industries to figure out, and to roll out in a clear, consistent way for some of these really tough questions. And this needs some technology.

Talia: I love that collaboration and the thinking through how you would enact these things. We have a researcher who has been doing work to figure out whether, if you display to people just the raw information about where an image came from, that affects whether people judge it to be true or false. She found it does have some really beneficial effects, so I think there's some optimism behind doing that sort of work, which is really exciting. OK, jump in. What do you think?

Sec. Aguilar: I'm going to go back to my days in grammar school and say: every time you wrote a report, you had to use a primary source. So if these chatbots and these machines were able to use data only from primary sources -- those being dot-gov websites -- and we made sure those were being used. If we could go back to dot-gov, go back to the statutes, and use primary sources as the source of information.
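A minimal sketch of Secretary Aguilar's primary-sources idea, assuming a hypothetical retrieval step feeding a chatbot: restrict whatever documents ground the answer to official .gov domains, so a stale third-party claim about registration deadlines never reaches the model. The documents and URLs below are illustrative placeholders, not a real index.

```python
# Sketch of "primary sources only" grounding: before a chatbot answers
# an election question, filter its retrieved documents to .gov domains.
# The documents and page URLs here are hypothetical placeholders.
from urllib.parse import urlparse

def is_primary_source(url):
    """Treat official .gov hosts (e.g. state election offices) as primary."""
    host = urlparse(url).hostname or ""
    return host.endswith(".gov")

def retrieve_for_answer(documents, query):
    """Keep only primary-source documents matching the query for grounding."""
    hits = [d for d in documents if query.lower() in d["text"].lower()]
    return [d for d in hits if is_primary_source(d["url"])]

docs = [
    {"url": "https://www.nvsos.gov/voting",          # hypothetical page
     "text": "Nevada offers same-day voter registration."},
    {"url": "https://example-blog.com/vote-tips",    # hypothetical page
     "text": "You must register three weeks before the election."},
]

for d in retrieve_for_answer(docs, "register"):
    print(d["url"], "->", d["text"])
# Only the .gov page survives the filter, so the model never sees the
# stale three-weeks claim that would cost Nevada a same-day registrant.
```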
Vivian: I'll harken back to that Minnesota law Secretary Schwab was referring to. We need Congress -- the federal Congress -- to act and to create legislation along these lines. Again, we don't want to ban synthetic media; that would be ridiculous. But we do want disclosure, mandatory disclosure, with real penalties for those who don't comply, and we want to give the Federal Election Commission more teeth. The Federal Election Commission, most people don't realize, is really only about campaign finance. They don't have any other authority -- and probably shouldn't have a lot of other authority, by the way; I'm not recommending that. I think it's right that elections are handled by the states. But when it comes to this kind of disclosure around the use of synthetic media and A.I.-generated false content, I think there is a role, and I know there are many members of Congress of both parties who agree that this is a role the F.E.C. can play.

Talia: Great policies. If I had that magic wand right now, I'd do it. For our last question, I want to end by giving people a sense of where we're all coming from, so that after the 2024 election we can all reflect back on this panel and what we thought. I want to read some headlines about A.I. and elections, and I want to hear from each of you, in our short time left, whether you think the fears of A.I. are overhyped, underhyped, or just about right. The Guardian says: disinformation reimagined -- how A.I. could erode democracy in the 2024 U.S. elections. Poynter: how generative A.I. could help foreign adversaries influence U.S. elections. From Foreign Affairs: the coming age of A.I.-powered propaganda. Overhyped, underhyped, or just about right? Vivian?

Vivian: I'll quote the other great philosopher, that being Taylor Swift, and say two things can be true at the same time. So it can be overhyped and underhyped at the same time. That's my answer.

Talia: Do you want to offer 30 seconds more of explanation for that response?

Vivian: First of all, we don't know what's going to happen in 2024. I think there is reason to be very concerned about the impact of synthetic media and A.I.-generated false information on the election, so it is not overhyped; we all need to be aware of that, for the reasons I mentioned before, so people know where to check to make sure something is true. That's the underhyped part. The overhyped part is that I fear we're going to just make people stop believing -- like I've said over and over again, that they won't believe anything. And that's got nothing to do with technology or tech companies' policies. That's a massive societal issue that could be incredibly damaging.

Talia: OK. We have to be quick now, because our time is almost up. Over, under, just about right?

Sec. Aguilar: All of the above, as long as we're prepared to respond to it collectively.

Talia: OK.

Becky: I think it's important for us to be clear-eyed about what's coming. I don't necessarily think all of those things are likely to happen right now with the current technology, but I do think it is very important that we build awareness of what might be coming in the future, so that we are prepared. That's core to the way we deploy our models: we deploy models even when they might be very stupid, so that people can understand where this technology is going, and I think it's critical that folks are clear-eyed going in so we can start building mitigations for the risks of tomorrow.

Sec. Schwab: First off, as a huge Kansas City Chiefs fan, I love Travis Kelce, so bonus points for bringing in the Swifties. As the son of a military guy: you expect the worst and hope for the best. So both would have to be true. We expect the worst and hope for the best. And I think Cisco would agree: you're going to have good elections this year, and you're going to be able to trust the results. You just may not be able to trust the people who tell you about the results. [laughter]

Talia: You heard it here. And check the dot-gov. Please join me in thanking our panelists so much. [applause]

[captioning performed by the national captioning institute, which is responsible for its caption content and accuracy. visit ncicap.org]
[captions copyright national cable satellite corp. 2024]
