
Hillary Clinton, Fmr. Google CEO Eric Schmidt Discuss AI Challenges | CSPAN | April 3, 2024, 10:02am-10:33am EDT

10:02 am
>> buckeye broadband supports c-span as a public service, along with these other television providers, giving you a front row seat to democracy. >> up next, a discussion on the potential impact of artificial intelligence on elections, featuring former secretary of state hillary clinton and former google c.e.o. eric schmidt, from the aspen institute and columbia university's institute of global politics. this is nearly 30 minutes. >> first, we are so delighted to have eric schmidt with us, especially because he is, as you just heard, one of our carnegie distinguished fellows at the institute of global politics. and he has been meeting with
10:03 am
students and talking to faculty about a lot of these a.i. issues that we have surfaced during our panels today. of course he wrote a very important book with the late dr. henry kissinger on artificial intelligence. so we're ending our afternoon with eric and trying to see if we can pull together some of the strands of thinking and challenges and ideas that we've heard. so eric, thank you for joining us. you look like you're in a very comfortable but snowy place. i wanted to start by asking you, what are you most worried about with respect to a.i. in the 2024 election cycle? >> well, first, madam secretary, thank you for inviting me to participate in all the activities. i'm at a tech conference in
10:04 am
snowy montana, which is why i'm not there. if you look at misinformation, we now understand extremely well that virality, emotion, and particularly powerful videos drive voting behavior, human behavior, moods, everything. and the current social media companies are weaponizing that, because they respond not to the content but rather to the emotion, because they know the things that are viral are outrageous, right? crazy claims get much more spread. it's just a human thing. so my concern goes something like this. the tools to build really, really terrible misinformation are available today globally. most voters will encounter them through social media. so the question is, what have the social media companies done to make sure that what they are promoting, if you will, is legitimate under some set of
10:05 am
assumptions? >> you know, i think that you did an article in the m.i.t. technology review fairly recently, maybe at the end of last year. and you put forth a 6 -- six-point plan for fighting misinformation and disinformation. i want to mention both because they are distinct. what were your recommendations in that article to share with our audience in the room and online? what are the most urgent action that is tech companies particularly as you say the social media platforms could and should take before the 2024 elections? >> well, first i don't need to tell you about misinformation because you have been a victim of that and in a really evil way by the russians. when i look at the social media platforms, here is the plant
10:06 am
fact: if you have a large audience, people who want to manipulate your audience will find it, and they'll start doing their thing. they'll do it for political reasons, economic reasons, or because they're simply nihilists. they don't like authority. and they'll spend a lot of time doing it. so you have to have some principles. one is you have to know who's on the platform, in the same sense that if you have an uber driver, you don't know his name or details, but uber has checked them out because of all the problems they've had in the past. so you trust that uber will give you a driver that's a legitimate driver. the platform needs to know, even though they don't know who they are, that they're real human beings. the other thing they have to know is, where did it come from? we can technologically put watermarks; the technical term is steganography.
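since the conversation only names the technique, here is a minimal python sketch of least-significant-bit steganography, the family of watermarking methods schmidt is pointing to. it is an illustrative toy under simple assumptions -- content modeled as a flat list of 8-bit pixel values, invented function names -- not any platform's actual watermarking scheme.

```python
# Toy least-significant-bit (LSB) steganography: hide a provenance
# marker in the low bit of each pixel value. Illustrative only.

def embed_watermark(pixels: list[int], mark: bytes) -> list[int]:
    """Hide `mark` in the lowest bit of each 8-bit pixel value."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the watermark")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the least significant bit
    return out

def extract_watermark(pixels: list[int], length: int) -> bytes:
    """Read `length` bytes back out of the low bits."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i : i + 8]))
        for i in range(0, len(bits), 8)
    )

if __name__ == "__main__":
    image = list(range(256)) * 4            # stand-in for grayscale pixel data
    marked = embed_watermark(image, b"origin:demo")
    print(extract_watermark(marked, len(b"origin:demo")))  # b'origin:demo'
```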
10:07 am
you know roughly how it entered your system. you know how the algorithms work. we know it's very important that you work on age-gating so you don't have people below 16. so those are sensible ways of taking the worst parts of it out. i think one of the things that i wrote about is, if you look at the success of reddit and their i.p.o., what they did, though they were reluctant to do anything at first, improved the overall discourse. the lesson i learned is if you have a large audience, you have to be an active manager of people who are trying to distort what you as a leader are trying to do. >> that reddit example is a good one, because i don't have anything like the experience you do. but just as an observer, it seems to
10:08 am
me that there's been a reluctance on the part of some of the platforms to actually know. it's kind of like they want deniability: i don't want to look too closely because i don't want to know, and i can tell people i didn't know, and maybe i won't be held accountable. but actually, i think there's a huge market for having more trust in the platforms because they are taking off, you know, certain forms of content that are dangerous, however you define that. and your recommendations in your article focus on the role of distributors. maybe go first, eric, in explaining to us, like, what should we think about and, more importantly, what should we expect from a.i. content creators and from social media platforms that are either utilizing a.i. themselves or
10:09 am
being the platforms for the use of generative a.i.? how do we protect against it even with a.i. or open source developers? is there a way to distinguish that? >> it's sort of a mess. there are many, many different ways in which information gets out. so if you go through the responsibility, the legitimate players, the ones offering tools and so forth, all have the responsibility to mark where the content came from and to mark that it's synthetically generated. in other words, we started with this, and we made it into that. there are all sorts of cases like, i touched up the photo. but you should record that it was altered, so you know it's an altered photo. it doesn't mean it was altered in an evil way. the real problem has to do with a
10:10 am
confusion over free speech. so i'll say my personal view, which is i'm in favor of free speech, including hate speech, that is done by humans, and then we can say to that human, you are a hateful person, and we can criticize them and listen to them and then hopefully correct them. that's my personal view. what i'm not in favor of is free speech for computers. the confusion here is you get some idiot, right, who is just literally crazy, who is spewing all this stuff out, who we can ignore, but the algorithm can boost them. there's liability there -- the platform's responsibility for what they're doing. unfortunately, although i agree with what you said, the trust and safety groups in some companies are being made smaller and/or are being eliminated. i believe at the end of the day these systems are going to get regulated, and pretty hard.
10:11 am
you have a misalignment of interests. if i'm the c.e.o. of a social media company, i make more revenue with engagement. i get more engagement with outrage. why are we so outraged online? it's because the social media algorithms are boosting that stuff. most people, it is believed, are more in the center, and yet we focus on the edges -- and this is true of both sides. everybody's guilty. i think what will happen with a.i., just to answer your question precisely, is a.i. will get even better at making things more persuasive, which is good in general for understanding and so forth. but it's not good from the standpoint of election truthfulness. hillary: yeah, that is exactly what we've heard this afternoon, is that, you know, the authoritativeness and the authenticity issues are going to get more difficult to discern. and it will be a more effective
10:12 am
message. you know, i was struck by one of your recommendations, which is kind of like -- it's a recommendation that can only be made at this point in human history, and that is to use more human beings to help. and it's almost kind of absurd that we're sitting around talking about, well, maybe we can ask human beings to help human beings figure out what is and isn't truthful. how do we incentivize companies to use human beings? and how do we avoid the exploitation of human beings? because there have been some pretty troubling disclosures about the sweatshops of human beings in certain countries in the global south who are, you know, driven to make these decisions, and it can be quite, you know, quite overwhelming. so when you've got companies, as
10:13 am
you just said, gutting the trust and safety groups, how do we get people back to some kind of system that will make the kinds of judgments that you're talking about? >> well, speaking as a former c.e.o. of a large company, companies tend to operate on fear of being sued, and section 230 is a pretty broad exemption. for those in the audience, section 230 is sort of the governing law on how content is handled. and it's probably time to limit some of the broad protections that section 230 gave. there are plenty of examples where somebody was shot and killed over some content where the algorithm enabled this terrible thing to occur. there is some liability. we can try to debate what that is. if you look at it as a human
10:14 am
being, somebody was harmed and there was a question of liability, but the system made it worse. so that's an example of a change. but i think the truth, if i can just be totally blunt, is ultimately it's information and the information space that we live in; you can't ignore it. i used to give the speech and say, you know how we solve these problems? turn your phone off. eat dinner with your family. and have a normal life. unfortunately, my industry -- and i was part of that -- made it impossible for you to escape all of this. as a normal human being, you're exposed to all of this terrible filth. that's going to get fixed by the industry collaboratively, or by regulation. let's think about tiktok, because tiktok is very controversial. it is alleged that a certain kind of content is being spread more than others. tiktok isn't social media. it's really television.
10:15 am
and when you and i were younger, there was this huge fracas on how to regulate television. it was a rough balance where we said, fundamentally, it's ok if you present one side as long as you present the other side in a roughly equal way. that's how societies resolve these information problems. it's going to get worse unless you do something like that. >> well, i agree with you 100% in both your analysis and your recommendations, and it's not the first time we've talked about the need to revisit and, if not completely eliminate, certainly dramatically revise section 230. it's outlived its usefulness. there was an idea back in the late 1990's, when this industry was so much in its infancy. but we've learned a lot since then, and we've learned a lot about how we need to have some
10:16 am
accountability, some measure of liability, for the sake of the larger society, but also to give direction to the companies. these are very smart companies. you know that. you spent many years at google. they're going to figure out how to make money. but let's have them figure out how to make a whole lot of money without doing quite so much harm. that partly starts with dealing with section 230. you know, when we were talking earlier about, you know, what a.i. is aiming at, you know, the panels were all, you know, very forthcoming. and they said, you know, we know there are problems. we're trying to deal with these problems. we know even from the public press that a number of a.i. companies have invented tools that they've not disclosed to the public because they themselves assess that those tools would make a difficult situation a lot worse. is there a role, eric -- i know
10:17 am
there's the munich statement negotiated at the munich security conference as a start. but is there more that could be done with a public-facing statement? some kind of agreement by the a.i. companies and the social media platforms? you know, to really focus on preventing harm going into the election? is that something that's even feasible? >> it should be. the reason i'm skeptical is that there's not agreement among political leaders -- of course, you're the world's expert on that -- and the companies on what defines harm. i have wandered around congress for a few years on these ideas. and i'm waiting for the point where the republicans and the democrats are in agreement, from their local and individual perspectives, that there's harm on both sides. we don't seem to be quite at that point. this may be because of the nature of how president trump works,
10:18 am
which is always sort of baffling to me. but there's something in the water that's causing a nonrational conversation, so this is not possible right now. so i'm skeptical that that's possible. i obviously support your idea. the other thing i would say, and i don't mean to scare people, is that this problem is going to get much worse over the next few years -- maybe or maybe not by november, but certainly in the next cycle -- because of the ability to write programs. i'll give you an example. i was recently doing a demo. the demo consists of, you pick a stereotypical voter. let's say it's a hispanic woman with two kids. she has these two interests. you create a whole interest group around her. she doesn't exist. it's fake. then you use python to make five different variants of her
10:19 am
with different ages and backgrounds echoing the same voices. so the ability to have a.i., broadly speaking, generate entire communities of pressure groups that are, in fact, virtual -- it's very hard for the systems to detect that these people are fake. there are clues and so forth. but to me, this question about the ability to have computers generate entire networks of people who don't exist, to act for a common cause which may or may not be one that we agree on, but probably influenced by the national security interests of north korea or china, or influenced by some business objective from the tobacco companies or you name it -- i worry a lot about that. and i don't think we're ready. it's possible, just to hammer on this point, for the evil person who inevitably is sitting in the basement of their home, whose mother gives them food at the top of the stairs, to use these computers. that's how powerful these tools are.
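to make concrete how little code the demo schmidt describes would take, here is a toy sketch: one invented base persona fanned out into demographically varied copies that all echo the same talking point. every name, field, and value below is a hypothetical stand-in; a real influence operation would pair something like this with generative models for the text, images, and voices.

```python
# Toy persona fan-out: many fake "voters" pushing one shared message.
# All data here is invented for illustration.

import random
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    age: int
    city: str
    interests: list[str]
    talking_point: str  # the one "voice" every variant echoes

BASE_INTERESTS = ["school funding", "small business"]
TALKING_POINT = "candidate X will fix our schools"   # hypothetical message

def make_variants(n: int, seed: int = 0) -> list[Persona]:
    """Generate n demographically varied personas pushing the same message."""
    rng = random.Random(seed)
    first = ["Maria", "Ana", "Lucia", "Sofia", "Elena"]
    cities = ["Phoenix", "El Paso", "Fresno", "Tampa", "Denver"]
    return [
        Persona(
            name=f"{rng.choice(first)} {rng.choice('ABCDE')}.",
            age=rng.randint(25, 55),
            city=rng.choice(cities),
            interests=BASE_INTERESTS,
            talking_point=TALKING_POINT,
        )
        for _ in range(n)
    ]

if __name__ == "__main__":
    for p in make_variants(5):
        print(p)
```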
10:20 am
>> ok. [laughter] well, let's try to bring it back a little bit to where we are here at the university, in this, you know, great setting of so many people who have a lot to contribute, and working in partnership with aspen digital, which similarly has a lot of convening and outreach potential. what can universities do? what can we do in research, particularly on a.i.? how do we create a kind of, you know, broad network of partners, like we're doing here between i.g.p. and aspen digital, and begin to try to do what's possible to educate ourselves, educate our students, in
10:21 am
combating mis- and disinformation with respect to elections. >> so the first thing we need to do is to show people how easy it is. i would encourage every university program to try to figure out how to do it -- obviously, don't actually do it -- but it's relatively easy, and it's really quite an eye-opener. and i've done this for as long as i've been alive. the second thing i would do is, there's an infrastructure that would be very helpful. the best design that i'm familiar with is blockchain-based. it's a name and origin for every piece of content, independent of where it showed up. so if everyone knew that this piece of information showed up here, you can then have provenance and understand, how did it get there? who pushed it? who amplified it?
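as a rough illustration of the blockchain-based design schmidt describes -- a stable name and origin for every piece of content, plus an append-only record of who pushed and amplified it -- here is a toy hash-chained provenance ledger in python. it is a sketch under invented assumptions, not any real system's protocol.

```python
# Toy provenance ledger: content gets a stable name (its hash) and an
# append-only, hash-linked chain of who created and amplified it.

import hashlib, json, time

def content_id(data: bytes) -> str:
    """A stable, origin-independent name for a piece of content."""
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    def __init__(self):
        self.chain = []          # list of hash-linked provenance records

    def record(self, cid: str, actor: str, action: str) -> dict:
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        entry = {"cid": cid, "actor": actor, "action": action,
                 "ts": time.time(), "prev": prev}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.chain.append(entry)
        return entry

    def history(self, cid: str) -> list[dict]:
        """Who created, pushed, and amplified this content, in order."""
        return [e for e in self.chain if e["cid"] == cid]

if __name__ == "__main__":
    ledger = ProvenanceLedger()
    cid = content_id(b"a viral video file")
    ledger.record(cid, "creator@site-a", "created")
    ledger.record(cid, "account-123", "amplified")
    for e in ledger.history(cid):
        print(e["actor"], e["action"])
```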
10:22 am
>> that would help our security services, our national security people, to understand: is this a russian influence campaign or is it something else? so there are technical things and there are also educational things. i think this is only going to get fixed if there is a bipartisan, broad consensus that taking the edges, the crazy edges, the crazy people -- and, you know, who i'm talking about -- and basically taking them out is necessary. i'll give you an example. there was an analysis that in covid, the number one spreader of misinformation about covid online was a doctor in florida, responsible for something like 13% of all of it. he had a whole influence campaign of lies to try to convince you to buy his supplements versus a vaccine. that's just not ok in my view. the question for me is why was that allowed by that particular social media company to exist even after he was pointed out? you have a moral, legal, and technical framework. but it has to be seen that it's
10:23 am
not ok to allow this evil doctor, for profit, to mislead people on vaccinations. >> just to follow up on that -- i mean, i don't disagree about what has to happen if we're going to end up with some kind of legislation or regulatory framework from the government. but if they were willing, is there anything that the companies themselves could do, as i say, if they were willing to, that would lay out some of the guardrails that need to be considered before we get to the consensus around legislation? >> of course -- the answer is yes. but the way it works in the company, you don't get to talk to the engineers. you get to talk to the lawyers. and the lawyers are very conservative and they won't make commitments. it's going to require some kind of agreement among the leadership of the companies of what's in bounds and what's out of bounds, right?
10:24 am
and getting to that is a process of convening and conversations. it's also informed by examples. so i would assert, for example, that every time someone is physically harmed from something, we need to figure out how we can prevent that. that seems like a reasonable principle if you're in the digital world now. working from those principles is the way it's going to get started. it's not going to happen unless it's forced by the government. the best way to make it happen, in my view, is to make a credible and feasible proposal about where the guardrails are. we've been working on this. and you have to have content moderation. when you have a large community, these groups will show up. they will find you, because their only goal is to find an audience to spread their evil, whatever the evil is. and i'm not taking sides here.
10:25 am
>> well, i think the guardrail proposal is a really good one. obviously, you know, we here at i.g.p., aspen digital, the companies who are here, others, researchers who are here -- maybe people should take a run at that. i mean, i'm not naive, i know how difficult it is. but i think this is a problem we all recognize. it's not going to get better if we keep wringing our hands and fiddling on the margins. we have to try something different. so -- >> let me just be obnoxious. i've sat through all these safety discussions for a long time. and these are very, very thoughtful analyses. they're not producing solutions in their analysis that are implementable by the companies in a coherent way. here's my proposal. identify the people. understand the provenance of the data. publish your algorithms. be held, as a legal matter, to the claim that
10:26 am
your algorithms are what you said they are. reform section 230. make sure you don't have kids on the platform, and so forth, etcetera. you know, make your proposals, but make them in a way that's implementable by the team. if there's a particular kind of piece of information that you think should be banned, write a specification well enough that, under your proposal, the computer company can stop that, right? that's where it all fails, because the engineers are busy doing whatever they understand. they're not talking to lawyers much. but the lawyers prevent anything from happening. they're afraid of liability. they don't have leadership from the company, for the reasons you know. and that's where we're stuck. >> well, that's both a summary and a challenge, eric. and i -- i particularly appreciate that, and especially the work you've been doing to try to, you know, sort this out and give some guidance.
10:27 am
so you get the last word from beautiful snowy montana -- the last word to kind of answer that challenge, you know, to ask us to respond and follow up on what you've outlined as the one path forward, and to try to do it in a collaborative way with the companies and other concerned parties. >> the snowstorm is hitting behind me. look, i think that the most important thing that we have to understand is this is our generation's problem. this is under human control. there's this sort of belief that none of this stuff can get fixed. but you know from your pioneering work over some decades here that, with enough pressure, you really can bend the needle. you just have to get people to understand it. these problems are not unsolvable.
10:28 am
this is not quantum physics. it's a relatively straightforward problem about what's appropriate and what's not. the a.i. algorithms can be tuned to whatever society wants. my strong message to everyone at columbia and, of course, all the partners is, instead of complaining, which i like to do a great deal, why don't we collectively write down the solution, organize, partner with institutions, try to figure out how to get the people in power to say, ok, i get it, right? that this is reasonably bipartisan. it makes society better. there's this old rule, gresham's law, that bad speech drives out good speech, which is why the internet is a cesspool. i used to say that, and i would say, i don't like to live in a cesspool, turn it off. the damage that's being done online to women, that's just
10:29 am
horrific. why would we allow this? you just have to have an attitude -- i'm trying to fund some open source technology, open tools that detect bad stuff. it's going to take -- it's going to take a concerted effort, and i really appreciate, madam secretary, your attention on this. somebody's got to push. hillary: well, you have, and let's keep going, eric. i'm so grateful to you. i hope you have a great time in the snowstorm and whatever else comes next. but let's show our appreciation to eric schmidt for being with us. thank you so much. [applause] >> thank you all. hillary: well, i think we have a call to action. we just have to get ourselves in the frame of mind that we're willing to do that. and even writing something down will help to focus our, you know, minds about what makes sense and what doesn't make sense. we're not going to let you all off the hook. we want to come back to you.
10:30 am
we want to have something come out of this. we can talk about this, meet about this until the cows come home. but in the meantime, as eric said, and i agree, it will just get worse and worse. and we have to figure out how we can assert ourselves and maintain the good and try to deal with, you know, that which is harmful. please join us in this effort. as i say, we will come back to you and seek your guidance and your support. thank you all very much. [applause] >> american history tv, saturdays on c-span2. exploring the people and events that tell the american story. at 7 p.m. eastern, our american history tv series, congress investigates, looks at historic congressional investigations that led to changes in policy and law. this weekend, the truman
10:31 am
committee, headed by senator and future president harry truman, examined the national defense program during world war ii and whether there was waste and corruption in defense contracting. at 8 p.m. eastern on lectures in history, a university of kentucky writing and rhetoric professor on the legacy of mamie till-mobley. at 9 p.m. eastern on the presidency, a discussion about president grant's military service, presidency, and legacy. exploring the american story. watch american history tv, saturdays on c-span2. find a full schedule on our program guide or online any time at c-span.org/history. >> celebrating the 20th anniversary of our annual studentcam documentary competition, this year c-span asked middle and high school students across the country to look forward while considering
10:32 am
the past. participants were given the option to look 20 years into the future or 20 years into the past. in response, we received inspiring and thought-provoking documentaries from over 3,200 students across 42 states. our top award of $5,000 for grand prize goes to nate coleman and jonah rothlein from weston high school for "innocence held hostage: navigating past and future conflicts with iran." >> it is evident that in the next 20 years the united states must make more policy that places heavy restrictions on all americans traveling to iran, because not only will we see less hostage taking, but the united states will no longer have to participate in such considerable negotiations with iran. >> congratulations to our winners. watch the top 21 winning documentaries on c-span every day this month, starting at 6:50 a.m. eastern, or any time online at studentcam.org.
