17 Burst results for "Humanity Institute"

"humanity institute" Discussed on Heartland Newsfeed Radio Network

Heartland Newsfeed Radio Network

07:45 min | 7 months ago

"humanity institute" Discussed on Heartland Newsfeed Radio Network

"Seven, three, three with you tonight you've got Ian the nobody formerly known as rich Paul. hoops mark. That was my bad try that again and mark excellent you sound better with a microphone helps. So, welcome to the program. We will take your calls about absolutely anything you want to discuss coming up mark. You're gonNa tell us about the college students of America and apparently growing numbers that them are now supporting what is described in the story as violent censorship. So just normal censorship of burning books or whatever, but actually just coming up in attacking people right for their words well. Basically for people visiting the campus. It's specifically talking about people visiting the campus and then speaking and it's not even your it's not even your words I mean they they say they want to punch. Nazis. But the reality is they define anybody who disagrees with them. It doesn't seem right. I mean that's the whole Antifa if you if you say that you want to punch Nazis pretty soon, everybody's the Nazi yet right because it's almost like a like a variant of Godwin's. A Nazi right a is becoming common closer. Not a national. Socialist. But in fact, an evil person and the old saying is that from from the Republicans they would say we think they're wrong meaning the Republicans think that the Democrats are wrong. They think we're evil. So. If you take that phrase to meet be meaningful and I, do now I think it's getting more and more that both sides I think both sides but think to the other side is evil eye both sides definitely think I'm evil. I think. With either of them. More but decreasing numbers on the right that believe that those on the left are wrong and not evil and. Fewer and probably not decreasing numbers on the left that think that the people on the right are simply evil. What are you based on? Just my own personal look at things. Okay. You know I don't have anything else to say about it but. The writers aren't Punching Nazis. Right? They're not saying it's OK to punch your Nazi punching commies either even if they do joke about throwing them out of helicopters, they don't actually do it. Yeah. I mean there are the you know the the proud boy types that are going into these zones where the lefties are protesting and I would suggest that that's looking for trouble. I don't know because the problem is if the proud boys go and have a demonstration of whatever it is, they're demonstrating for then Antifa shows up and attacks them. It happens to you know and so I don't think that was that was not a tab tactic that originated with the proud boys is something that the proud boy is due in response to what the left does. We saw that in Boston a couple of years ago. There was a free speech rally that I was not their friends of ours we're supposed to speak at and you know there was one. Guy. That had this fringe opinion or whatever, and it was supposed to be rally about different ideas and basically you know lefties of all sorts came out and they had to put up fencing around it and all kinds of stuff because somebody wanted punch Nazi and they they were they were really really flexible on what who might be a Nazi. We should do another one of those I don't want to Boston now. Yeah we should do it in Manchester. Well, you're going to find as many people in New Hampshire ready to punch Nazis but may come up from Boston for you. which is not to say that we're full of Nazi sympathizers. 
It's just that New Hampshire seems to be a little bit more restrictive on what they consider to be Nazis. Anyway, we can get into the story; it's about college students and their opinions about punching people with different opinions. But first we go to Dave in New Hampshire, listening online. Go ahead, Dave. Well, gentlemen, all these culture war type things that are going on, those are just distractions from a much bigger issue facing humanity right now: the question of artificial general intelligence. Okay, artificial general intelligence, but what does that mean? It's generally considered to mean, you know, computers that are smarter than humans and intuitive in the same way humans are. As a programmer who's worked in artificial intelligence, I can tell you that we're delighted if we manage to create artificial stupidity. So I think it's a long time before there's any real artificial intelligence. Yeah, that's been the opinion of not just you but also Chris Wade, our Friday night co-host, who's also a programmer, and he laughs whenever somebody brings up the term artificial intelligence. Yeah, I mean, there are programs that are increasingly more complex, but at this point writing code that creates a thinking being is a ways off. What were you telling us about, Dave? It sounds like, at least the way you described it, you're talking about the idea of, oh, the word just slipped out of my head. Artificial general intelligence was the term. Yes, but there's another word that somebody used to describe when the machines, ah, the singularity. That's it, the singularity, when machines get so smart, or when so-called AI gets so smart, that it can actually eclipse the human brain. I think once you have that AI, once it blinks on, that's the singularity. It has to be able to prove, right, that it can pass the Turing test or whatever other tests, and then it will happen very, very quickly. You know, once that thing blinks on, you better be paying close attention, because it's on its own now. Dave, is that what you were talking about, the singularity? Pretty much, but it wouldn't have to be that. You can have an existential threat from a less intelligent AI than that. In fact, a less intelligent AI might be a bigger threat by not understanding what it's doing. But in any case, there is an organization that has been working to kind of game this out and determine what kind of programming would result in the least danger for humanity. They're called the Future of Humanity Institute; they operate out of Oxford University, and I have become, despite my opposition to government-funded universities, a big fan of this institute, so much so that I've donated money, something I never do. And I recently began a campaign to try to get mid-level tech leaders to do the same thing. I'm going to guess I've emailed about four, five, eight firms, just to say, you know, throwing it out there: have you considered donating to the Future of Humanity Institute? I haven't gotten any responses; I didn't really expect to. But just to be clear, the Future of Humanity Institute, they're not against artificial intelligence; they're just in favor of directing it in a certain way that they believe would be the most beneficial to humans. Yeah. They understand two things, and Ray 
Kurzweil feels the same way. One, if you try to restrict it too much, or maybe even if you try to restrict it at all, it ends up like the drug war, right? It becomes a black market thing, and it's awful. And the second thing is that although artificial intelligence is a great threat to humanity, it's also a great hope for humanity, because if it's handled correctly, it can solve all the other problems. It can prevent nuclear wars, and it can prevent...

Dave Humanity Institute Boston General Intelligence America New Hampshire Ian programmer Manchester Godwin Antifa Chris Wade Ray Kurzweil Oxford University
"humanity institute" Discussed on Progressive Talk 1350 AM

Progressive Talk 1350 AM

03:06 min | 10 months ago

"humanity institute" Discussed on Progressive Talk 1350 AM

"When those two things conflict, humanity should come first. To say that the public has and how science is done has to be an informed, say. No pitchforks and torches. This's why a movement that takes existentially risk seriously requires trustworthy, skilled trained scientists to make our say an informed one. We rely on them for that science isn't the enemy. If we abandon science, we're doomed. If we continue to take the dangers of science casually, we're doomed. The only route through the near future is to do science, right? And scientists aren't the enemy either. They have often been the ones who sounded the alarm when science was being done recklessly when a threat emerged that had been overlooked. Those physicists who decided that three and a 1,000,000 was an acceptable chance of burning off Earth's atmosphere were the same ones who figured out that there was something to be concerned with in the first place. It was microbiologist who called for a moratorium on gain of function research after the H five n one experiments. It was particle physicists who wrote papers questioning the safety of the Large Hadron Collider. If you're a scientist, start looking seriously at the consequences of your field, and if work within it poses an existential risk. Start writing papers about it. Start analyzing how it can be made safe. Take custody of the consequences of your work. The people who are dedicated to thinking about exist. Central risks are waiting for you to do that. This's Sebastian FARC, or, to a certain extent, organizations like Thie F h i. The Future of Humanity Institute. Their job is just poke the rest of the community. And by the way, this is a thing. And then for a I researchers or biology researchers to take that on and to make it their own project and the sooner And the more I can step out of that game and leave it to those communities better. Many of these solutions are already being worked on. Scientists around the world are researching large problems in raising alarms. But since we have a limited amount of time since we're racing the clock We have to make sure that we don't waste time working on risks that seem big but don't qualify as genuine existential threats and we can't tell one type from the other until we start studying them. Biggest sea change, though, has to come from society in general. We have to come together like we never have before. We have to put scientists in a position to understand existentially risks. And we have to listen to what they come back and tell us Swamp cooler Swap box desert cooler. Whatever you call evaporative Cougars, you can call him old hat. It's time to step up to modern refrigerated air with high efficiency systems. From strong built,.

scientist Future of Humanity Institute Sebastian Farquhar
"humanity institute" Discussed on Progressive Talk 1350 AM

Progressive Talk 1350 AM

05:09 min | 10 months ago

"humanity institute" Discussed on Progressive Talk 1350 AM

"Gloom and doom. And optimism. The gloom and doom camp makes a pretty good case for why humans won't make it through this, possibly the greatest challenge our species will ever face. There's the issue of global coordination. Kind of like mindedness that will have to create among every country in the world to successfully navigate the coming risks. Like we talked about in the last episode, we will almost certainly run into problems with global coordination. Some nations may decide that they'd be better off going it alone and continuing to pursue research and development that the rest of the world has deemed too risky. This raises all sorts of prickly questions that we may not have the wherewithal to address. Does the rest of the world agree that we should invade, Don complying countries and take over their government. In a strictly rational sense, That's the most logical thing to do. Rationally speaking, toppling a single government. Even a democratically elected one is a small price to pay to prevent an existential risk that Khun Drive humanity as a whole to permanent extinction. We humans aren't strictly rational. Something is dire is invading a country and toppling its government comes with major costs, like the deaths of the people who live in that country and widespread disruptions to their social structures. If the ship's air down would we go to such an extreme to prevent our extinction? There's also the issue of money. Money itself is not necessarily the problem. It is what funds scientific endeavours. It's what scientists are paid with money is what we will pay the future researchers who will steers away from existential risks. The future of Humanity Institute is funded by money. Problem. Money poses where exist central risks are concerned is that humanity has shown that we are willing to sell out our own best interests and the interests of others. For money in market share. Or more commonly, that were willing to stand by and let others do it. With existential risks, Greed would be a fatal flaw. Everything from the tobacco industry to the fossil fuel industry, the antifreeze industry to the infant formula industry. All of them have a history of average. Frequently and consistently putting money before wellbeing and on a massive and global scale. How can we expect change when money is just a CZ tied to the experiments and technology that carry an existential risk? Also stacked against us is the bare fact that thinking about existentially risks is really, really hard. Analyzing existential threats demands that we trace all of the possible outcomes that thread from any actually might take and look for unconsidered dangers lurking there. Requires to think about technology that hasn't even been invented yet. Look a few more moves ahead on the cosmic chessboard. Then we're typically capable of seeing to put it mildly. We're not really equipped to easily think about existentially risks at this point. We also have a history of over reliance on techno optimism that idea that technology can save us from any crisis that comes our way. Perhaps even thinking that reaching the point of technological maturity will protect us from existentially risks is nothing more than an example of techno optimism. And as we add more existentially risks to our world, the chances increase that one of them may bring about our extinction. 
It's easy to forget, since it's a new way of living for us, but the technology we're developing is powerful enough, and the world is connected enough, that all it will take is one single existential catastrophe to permanently end humanity. If you take the accumulated risk from all of the biological experiments in the unknown number of containment labs around the globe, and you add it to the accumulated risks from all of the runs in particle colliders online today and to come, and you add the risks from the vast number of neural nets capable of recursive self-improvement that we create and deploy every day, and then you take into account emerging technologies that haven't quite made it to reality yet, like nanobots and geoengineering projects, and the many more technologies that will pose a risk that we haven't even thought of yet, when you add all of those things together it becomes clear what a precarious spot humanity is truly in. So you can understand how a person might look at just how intractable the problem seems and decide that our doom is complete; it just hasn't happened yet. I think we could be a bit more optimistic than that. This is.
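The "add all of those things together" step in the excerpt above is ordinary probability arithmetic: independent risks compound, so several individually small chances yield a noticeably larger combined chance. A minimal sketch in Python, using made-up illustrative probabilities (none of these numbers come from the episode):

```python
# Illustrative only: how small, independent per-century risks combine.
def combined_risk(probabilities):
    """Probability that at least one independent risk occurs:
    1 minus the product of the probabilities that each one does not."""
    p_none = 1.0
    for p in probabilities:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Hypothetical per-century probabilities for a few risk sources.
risks = {
    "engineered pathogen escape": 0.03,
    "unaligned self-improving AI": 0.05,
    "physics experiment surprise": 0.001,
    "nanotech / geoengineering accident": 0.02,
}

print(f"Chance at least one occurs: {combined_risk(risks.values()):.1%}")
# With these invented numbers the combined chance is roughly 9.8%,
# noticeably larger than any single risk on its own.
```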

Humanity Institute
"humanity institute" Discussed on Progressive Talk 1350 AM

Progressive Talk 1350 AM

03:57 min | 10 months ago

"humanity institute" Discussed on Progressive Talk 1350 AM

"Making it to a state of technological maturity for humanity. Where we have safely mastered our technology and can survive beyond the next century or two. Gloom and doom. And optimism. The gloom and doom camp makes a pretty good case for why humans won't make it through this, possibly the greatest challenge our species will ever face. There's the issue of global coordination. Kind of like mindedness that will have to create among every country in the world to successfully navigate the coming risks. Like we talked about in the last episode, we will almost certainly run into problems with global coordination. Some nations may decide that they'd be better off going it alone in continuing to pursue research and development that the rest of the world has deemed too risky. This raises all sorts of prickly questions that we may not have the wherewithal to address. Does the rest of the world agree that we should invade down complying countries take over their government. In a strictly rational sense, That's the most logical thing to do. Rationally speaking, toppling a single government. Even a democratically elected one is a small price to pay to prevent an existential risk that could drive humanity as a whole to permanent extinction. So we humans aren't strictly rational. And something is dire is invading a country and toppling its government comes with major costs, like the deaths of the people who live in that country and widespread disruptions to their social structures. If the ship's air down would we go to such an extreme to prevent our extinction? There's also the issue of money. Money itself is not necessarily the problem. It is what funds scientific endeavours. It's what scientists are paid with money is what we will pay the future researchers who will steers away from existential risks. The future of Humanity Institute is funded by money. Problem. Money poses where exist central risks are concerned is that humanity has shown that we are willing to sell out our own best interests and the interests of others. For money in market share. Or more commonly, that were willing to stand by and let others do it. And with existentially risks, greed would be a fatal flaw. Everything from the tobacco industry to the fossil fuel industry, the antifreeze industry to the infant formula industry. All of them have a history of average. Frequently and consistently putting money before wellbeing and on a massive and global scale. How can we expect change when money is just a CZ tied to the experiments and technology that carry an existential risk? Also stacked against us is the bare fact that thinking about existentially risks is really, really hard. Analyzing existential threats demands that we trace all of the possible outcomes that thread from any actually might take and look for unconsidered dangers lurking there. Requires to think about technology that hasn't even been invented yet. Look a few more moves ahead on the cosmic chessboard. Then we're typically capable of seeing to put it mildly. We're not really equipped to easily think about existentially risks at this point. We also have a history of over reliance on techno optimism that idea that technology can save us from any crisis that comes our way. Perhaps even thinking that reaching the point of technological maturity will protect us from existentially risks is nothing more than an example of techno optimism. And as we add more existentially risks to our world, the chances increase that one of them may bring about our extinction. It's.

Humanity Institute
"humanity institute" Discussed on Progressive Talk 1350 AM

Progressive Talk 1350 AM

04:58 min | 11 months ago

"humanity institute" Discussed on Progressive Talk 1350 AM

"Century or two. Gloom and doom. And optimism. The gloom and doom camp makes a pretty good case for why humans won't make it through this, possibly the greatest challenge our species will ever face. There's the issue of global coordination. Kind of like mindedness that will have to create among every country in the world to successfully navigate the coming risks. Like we talked about in the last episode, we will almost certainly run into problems with global coordination. Some nations may decide that they'd be better off going it alone and continuing to pursue research and development that the rest of the world has deemed too risky. This raises all sorts of prickly questions that we may not have the wherewithal to address. Does the rest of the world agree that we should invade down complying countries and take over their government. In a strictly rational sense, That's the most logical thing to do. Rationally speaking, toppling a single government. Even a democratically elected one is a small price to pay to prevent an existential risk that Khun Drive humanity as a whole to permanent extinction. But we humans aren't strictly rational. And something is dire is invading a country and toppling its government comes with major costs, like the deaths of the people who live in that country and widespread disruptions to their social structures. If the chips are down. Should we go to such an extreme to prevent our extinction? There's also the issue of money. Money itself is not necessarily the problem. It is what funds scientific endeavours. It's what scientists are paid with money is what we will pay the future researchers who will steers away from existential risks. The future of Humanity Institute is funded by money. The problem. Money poses where exist central risks are concerned is that humanity has shown that we are willing to sell out our own best interests and the interests of others. For money in market share. Or more commonly, that were willing to stand by and let others do it. With existentially risks, Greed would be a fatal flaw. Everything from the tobacco industry to the fossil fuel industry, the antifreeze industry to the infant formula industry. All of them have a history of average. Frequently and consistently putting money before wellbeing and on a massive and global scale. How can we expect change when money is just as tied to the experiments and technology that carry an existential risk? Also stacked against us is the bare fact that thinking about existentially risks is really, really hard analyzing existential threats. Demands that we trace all of the possible outcomes that thread from any actually might take and look for unconsidered dangers lurking there. Requires to think about technology that hasn't even been invented yet. Look a few more moves ahead on the cosmic chessboard. Then we're typically capable of seeing to put it mildly. We're not really equipped to easily think about existential risks at this point. We also have a history of over reliance on techno optimism that idea that technology can save us from any crisis that comes our way. Perhaps even thinking that reaching the point of technological majority will protect us from existentially risks is nothing more than an example of techno optimism. And as we add more existentially risks to our world, the chances increase that one of them may bring about our extinction. 
It's easy to forget, since it's a new way of living for us, but the technology we're developing is powerful enough, and the world is connected enough, that all it will take is one single existential catastrophe to permanently end humanity. If you take the accumulated risk from all of the biological experiments in the unknown number of containment labs around the globe, and you add it to the accumulated risks from all of the runs in particle colliders online today and to come, and you add the risks from the vast number of neural nets capable of recursive self-improvement that we create and deploy every day, and then you take into account emerging technologies that haven't quite made it to reality yet, like nanobots and geoengineering projects, and the many more technologies that will pose a risk that we haven't even thought of yet, when you add all of those things together it becomes clear what a precarious spot humanity is truly in. So you can understand how a person might look at just how intractable the problem seems and decide that our doom.

Humanity Institute
"humanity institute" Discussed on Serious Inquiries Only

Serious Inquiries Only

04:53 min | 2 years ago

"humanity institute" Discussed on Serious Inquiries Only

"I'm not a disingenuous critic here who's motivated to for self interested reasons. No, not at all. So I would also say quite emphatically that alternately don't care about whether or not the field. What happens with the field? The reason I would like at the moment the field to to flourish and to. Inspire eager young brilliant minds to to contribute is. I think the topics are are quite serious. And it's quite perplexing to me. That pinker would pinker seems to think that we're wasting our time, and we should close. The these various institutes organizations like the future for humanity institute or the center for the study existential risk at Oxford Cambridge, respectively because one of the best ways to ensure safe passage through the twenty th century is to have individuals thinking seriously about what could go wrong, it's consistent with what's called the pre mortem analysis where you try to figure out all of the ways that your plan could fail and then you devise strategies for avoiding those those mistakes. So it's essentially what existential risk dollars are doing. And by the way, it's he seems to have have it in for this community of researchers that community. Is tiny. It's really really small. There are not many resources that are actually being spent thinking deeply about how we could stumble into mission grape of extinction. So I think it's quite a low blow for him to to start off his response by suggesting that I'm in insincere someone who ultimately doesn't care about the truth. I care about my own self interest. And that I would self report could not be further from what is actually the case in reality. It's platitudes to say, but I genuinely care about the truth. And I think if you actually look at my history intellectual history, you'll see that that's sort of borne out because that history consists mainly of me. He be really sure that certain things are true. And then going maybe that's not true at all. Giving up some really core beliefs throughout my. Thirty six years. So yeah, anyways, I it's platitudinous. It's it's a bit like. Yeah. Undergrad to say like I care about that is my true motivation here. That's the reason I. I've been a critic of new Athena's them because I see a lot of intellectual dishonesty. I agree. And I just I can't echo this enough. There's no there's no this isn't the money. So there's no money in this things like the money is in the right enlightenment. Now, which is a book that place to people the vested interest in believing that things are great and the status quo warriors and people who wanna be able to be anti progressive. But have a kind of not a far, right? Justification for it. But like an intellectual may be Santer centre-right Santa maybe centre-left justification for it and write articles for Quillet. Like, that's where the money is. That's yes. I mean, if you want to look at the balance of things that that's that to me, that's my opinion on it. So we are vastly over time. But go ahead and put in a final word there. Yeah. The other kind of dumb thing about that statement that pinker made is the exact same idea could apply to him. You know, me pushing back against his ideas that poses. A threat to. Become this famous figure because of you know, his progressivism. And so this field of study perhaps undercuts that a little bit. So that's next essential threat. His career. Thank you. It's thank you so much for coming on given the time Phil Torres. Of course, you can just just search filters. 
If you want to, I think, don't you have, what would that be, a homonym? There's another Phil Torres who is out there being good-looking and stuff. What was that, a nominative doppelganger or something like that? Yeah, I'm not him, but my Twitter handle is @xriskology. Yes. And no, no insult to you, you're a very good-looking man as well. But I believe if you look up the other Phil Torres, he's some sort of Disney prince or something; he's a good-looking guy. Every time he's been on the show, you've mentioned how good-looking he is. Feel free, everybody, to read into that as much as you want; I'm not even resisting it. He's a good-looking guy. It's funny, I wish I had that; my name's too generic. Thomas...

Pinker Phil Torres humanity institute Disney Oxford Cambridge Thomas Twitter Quillette Thirty six years
"humanity institute" Discussed on Stuff They Don't Want You To Know Audio

Stuff They Don't Want You To Know Audio

04:29 min | 2 years ago

"humanity institute" Discussed on Stuff They Don't Want You To Know Audio

"Anywhere. You get your shows. Like to take a tickets slight pivot here and ask a couple of biographical questions. We probably should've asked in the beginning. This is this entire interview has made me very conscious of time as well. I hope we have enough time to finish. A what one thing a lot of people want to know is whether there was some specific moment in your life that inspired the end of the World Series. Was it? Was it something related to biotech? Did you like get a nasty cold? And the doctor said boy, this is weird. Josh to sit down, right? What happened? Have you been hanging out with ferrets later? I I actually was sick while I wrote up is so yeah. Which just really drove everything home that much more the thing that inspired me to do the series, which and also I want to just take this time right now to to to thank all three of you for your roles in helping me with the series, like overtime all three of you had a hand in it. And I appreciate it big time. So thank you hats off to all threes as well to kind of. Excited to hear it. It's true. I was to finally, but the whole thing started as you probably know just from this kind of intellectual curiosity about it. Like, I ran across Nick Bostrom, many years ago read some of his papers, and I just found it fascinating. I still find it fascinating. So the original point of the series was to say, hey, everybody check this out in this the coolest thing you've ever heard in your life. And as I dug into it more and more inserted actually interview the people involved like Nick Bostrom and Toby Ord and other people future of humanity institute. I realized oh, wait, this isn't this isn't just an intellectual pursuit. These people are doing like they're actually trying to warn the world like this is real like, wait. Wait, wait. Whoa. This is real and the the I underwent a conversion. And then so too did the series because I was still working on the series of the time, and there was a huge tone shift in the series. It went from straight. Eight basically, like a very dry book report to okay, we need to do something everybody in the this kind of thread of we need to form a movement. We need to start doing something emerged in the series and took on almost became like a character in the series, or certainly theme a major theme. So it was originally intellectual interests that that brought me to it. And then I kind of got struck by lightning on the way to finishing in it change, the tone big time. I'll tell you what made me wanna put my phone down and join up the movement, and it was in one of the episodes where you talk about a certain three in a million chance that occurred in the nineteen forties. Isn't that fascinating? Can you tell us a little bit of that story? Yeah. So the f- what's widely seen is the first human made eggs essential risk that we've ever faced was the the first detonation of an atomic bomb at the trinity test on July sixteenth nineteen forty five. In Alamogordo, New Mexico USA. And it wasn't that. They they were saying yes, this thing's going to be a deadly weapon. This is an existential risk a lot of people make the case in. I kinda subscribe to to that the nuclear bomb has never been an existential threat to humanity like nuclear war. I should say never actually been. 'cause we can't we probably couldn't wipe all of humanity out. And again, that's the thing that separates existential risks from all other types of risk everything else. We have the chance to rebuild. 
We have a chance to learn from that mistake. With existential risks, there are no second chances. There's no do-over. One thing goes wrong, and that's it for everybody. Right. So the nuclear bomb, just to say this, the nuclear bomb itself was not the existential risk; again, that part wasn't the existential risk. It was the detonation that conceivably posed one. The test, I should say, was the first possible human-made existential risk, and the reason was, they were sitting around, the dudes at the Manhattan Project, and I think it was Edward Teller...

Nick Bostrom Josh Edward teller humanity institute Alamogordo Toby Ord New Mexico Manhattan USA.
"humanity institute" Discussed on Stuff They Don't Want You To Know Audio

Stuff They Don't Want You To Know Audio

04:01 min | 2 years ago

"humanity institute" Discussed on Stuff They Don't Want You To Know Audio

"And I think kind of part and parcel to us saving ourselves and saving the world right is at the same time simultaneously learning that kind of being at the top of the food chain being Dave every man who has the ability to think an act about this kind of stuff that that makes a stewards for the rest of the planet. So even if climate change is not an existential threat to humans, which it seems like it's not from taking on essential risks from taking on exit central threats. We should in my opinion, kind of change our mentality. Whether we like it or not whether we're trying to or not are the our outlook would change, and I think that things like climate change would be mitigated. And this idea that Dave every man can't do anything to help that sense of hopelessness that kind of presses all of us, you know, down into our couches and in into this funk that kind of thing will will go away. And the reason why will. Go away. The reason why we can do anything. Why? Dave every man can do anything at all is because it turns out no one at the top is doing anything. I talked to philosopher named Toby Ord. Who's one of the guys the future of humanity institute, and he has spoken to people in the highest echelons of government about this one of the things they do is just try to like warn people, including government and say, hey, you're policymaker. They're not designing a I very well right now, one of them could get out of control and take over the world. What do you think about that? What are you guys doing about that? Oh, well, you know, that's really kind of above my pay grade. I'm sure someone else's handling this. And Toby like there's nobody above your paygrade. Like, it's up to you guys. If you're not doing anything, then that means no one's doing anything. They're stuck in a cycle of elections. Right. Those things for sure. Yeah. That's that's a big part of the problem as as far as leadership goes is, you know, not just with existential risk. But basically any large project any long term thing. Thing. That's one of the things that climate change is run run up into, but it's politicized too. So it's like literally you're appealing to a particular base by choosing to say something's not really a problem where ignoring it. Sure. It's almost like a power move. Right. Say this isn't really happening. No. Because I'm in charge or I'm the smartest guy in the room. Right. And I ignore all these other people that are saying that it is. I mean, like, it's not it's not it's almost ignorance as a like a move kind of you know, willful. Yeah. Exactly. And it's a disdain for expertise to I think that's really popular thing right now. And they kind of ties into that whole death cult thing that bothers me so much. You know, it's it's like, you're scientists. I don't care, you know, get get outta my face, egghead. I I don't care about the climate. That's just kind of a sentiment just a feeling that nothing entire zeitgeist. But it's definitely a part of the guy straight now for sure it's almost like we don't have that much time. Anyway. So let's just get the most out of it that we can in the short time. We have not really worry about the. The next part. It's basically like the disco era took over the entire world. That's kind of what it feels like. So we'll pause here and continue after a word from our sponsor. Who doesn't love a good crime story. 
We'd like to tell you about Hell and Gone, a new podcast from HowStuffWorks and School of Humans that follows writer and private investigator Catherine Townsend as she moves back to the Arkansas Ozarks to solve the two thousand four murder of twenty-two-year-old college student Rebecca Gould. Hell and Gone takes a unique approach to true crime, because it actually puts the listener in a situation where they're following a real-time murder investigation, as the writer and private investigator Catherine Townsend and her team try to bring Rebecca's killer or killers to justice. Every Wednesday, the team will explore new leads, knock on doors, and investigate every angle and every potential suspect until they crack this cold case and catch Rebecca's killer. So join the search: listen and subscribe on Apple Podcasts.

Toby Ord Dave Rebecca Gould Catherine Townsend humanity institute writer apple investigator murder Arkansas Ozarks twenty two year
"humanity institute" Discussed on TechStuff

TechStuff

05:21 min | 2 years ago

"humanity institute" Discussed on TechStuff

"And so. Oh complex they will effectively emerge as their own species in nineteen Ninety-three he pen to paper, titled the age of robots and this is a quote from that piece, computer, 'less industrial machinery exhibits the behavioral flexibility of single celled organisms. Today's best computer controlled robots are like these simpler, invertebrates a thousand fold increase in computer power in this decade should make possible machines with reptile like sensory and motor competence properly. Configured such robots could do in the physical world what personal computers now do in the world of data act on our behalf as literal minded slaves. Growing computer power over the next half century will allow this reptile stage will be surpassed in stages producing robots that learn like mammals. Muddled their world like primates, and eventually reason like humans, depending on your point of view humanity will then have produced a worthy successor or transcended inherited. Limitations and transformed itself into something quite new no longer limited by the slow pace of human learning and even slower biological evolution intelligent machinery will conduct its affairs on an ever faster ever, smaller scale, until course, physical nature has been converted to fine grain purposeful thought. Now, his ideas are predicated upon the assertion that consciousness, which is a quality. That's devilish difficult to define the fact, I would argue it's just as difficult to define as the term intelligence. He argues this arises from the material that is the mind is totally the product of our nervous system. Or if you wanna be a little more generous, the combination of our nervous system and our interactions with our environment's. So in other words, consciousness emerges from a system if that system meets the physical criteria if true, and I happen to believe that this is true that it then stands to reason that if you have a sufficiently complex system with powerful enough machines, we should be able to create an artificial entity that possesses consciousness if however consciousness arises from some other scientifically undiscovered or even undiscovered Rable quality, then it wouldn't matter. How complicated we build our toys, they would never become conscious. So in other words, if consciousness were the emergence from some other thing that science cannot address like a soul. For example. Then there's no way that we could create a conscious artificial being we can't create the soul. If that is in fact, how it works. I personally feel that that's not the case that our consciousness does arise from the material that it does come from our nervous system the complexity and the electrochemical processes of our nervous system. The question. I have is whether or not we will ever be able to replicate that in an artificial system not saying that it would be impossible just wondering if we will ever figure it out. It remains an open question. Nick Bostrom who served as the director of the future of humanity institute has written extensively about trans humanism. I talked about that a second ago that idea that we transcend being just humans. Through some process. Whether that means a computer, augmented person or a biologically, augmented person isn't really important at least from this perspective. It's very important from an ethical perspective. 
But he's using transhuman to describe someone who has moved away from what we would define as being a human being today. And like Kurzweil, he has hypothesized that the singularity will bring along with it some means of extending our lifespans indefinitely, but he feels that some of the more aggressive predictions are a little too optimistic. He has said that he feels there is a less than fifty percent chance that we'll have developed any sort of superhuman intelligence by the year twenty thirty-three. He thinks it's going to happen, but it might take a bit longer than that. Some of the people who believe, or have formerly believed, the singularity to be around the corner aren't convinced it's necessarily going to be good for us. Venture capitalist Bill Joy, who co-founded Sun Microsystems, has expressed concerns about it, and it wouldn't necessarily take a superhuman AI to do damage to us. Joy has pointed out that technology tends to advance our capabilities in all sorts of areas, including destructive ones.
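Moravec's "thousandfold increase in computer power in this decade," quoted above, is easy to sanity-check: since 2^10 = 1024, a thousandfold gain per decade amounts to roughly one doubling of compute per year. A small back-of-the-envelope sketch (the figures are just the ones quoted in the excerpt, not a forecast):

```python
# Back-of-the-envelope check of "a thousandfold increase ... in this decade".
import math

growth_factor = 1000   # thousandfold increase
years = 10             # over one decade

doublings = math.log2(growth_factor)          # ~9.97 doublings
doubling_time = years / doublings             # ~1.0 year per doubling
annual_growth = growth_factor ** (1 / years)  # ~2.0x per year

print(f"{doublings:.1f} doublings in {years} years, "
      f"one doubling every {doubling_time:.2f} years "
      f"(about {annual_growth:.1f}x per year)")
# That is faster than the classic 18-24 month Moore's-law cadence.
```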

kurzweil Bill joy Nick Bostrom humanity institute director fifty percent
"humanity institute" Discussed on Inside the Hive with Nick Bilton

Inside the Hive with Nick Bilton

04:25 min | 2 years ago

"humanity institute" Discussed on Inside the Hive with Nick Bilton

"More sane way than Elon Musk has about the potential downfalls of robots and how they could destroy society and so on and he recently put out a paper, which I thought was so fascinating about how technologies almost all technologies. Have a good side and the bad side. And we're seeing that, of course, was social media these days with Facebook and so on, but but he talks about technologies in terms of earn. And that each time you pull technology out of out of the earn. It's like a ball and some of them are white and some of them are gray. And and the great ones, of course, are used negatively. But we've never he says pull out. A black one yet a black one that could essentially destroy civilization or most of it and his theory. Of course, is that the we haven't done that simply by luck that we have been lucky that, you know, the the Cold War, for example, we didn't bomb each other because the speed with which took for Russia to send a nuke over. Here would have been a few minutes in given enough time to send one over there. And before you know, it there's no more Russia. There's no more America. And maybe everyone else in between. And so it became it actually had the adverse effect where it kind of calmed things created the stalemate where we knew we couldn't win and they knew they couldn't win. So it didn't happen. But that as we start to look at a I and as it does start to become more intelligent and the barrier to entry does become easier. One of the things of boss says, of course, that that you know, the. Thing we talked about earlier where anyone may be able to create a I until it what to do in the in the future that we will one day maybe pull out one of these black balls from the earn. And and that could be it. Do you think that Bostrom is right to say this or is he kind of looking at it from this pessimistic point of view that we're eventually just going to screw this whole thing up. Well, I definitely think he's looking at it from a pessimistic perspective. But I think what he says is is fundamentally possibly could happen. I mean, his focus in terms of artificial intelligence is on again on building AGI, right? An artificial general intelligence, something that would be at the level of a human being in terms of its ability to think in conceive ideas and the assumption on the part of almost everyone thinks about this is that once you reach that stage almost instantly it becomes smarter than human beings. So now, we've got a superintendence we've got something that that might you know. The difference between this entity and us might be the same as the difference between us and insect might just badly beyond us. And at that point. How do we control it a number of issues with that the other thing that comes up is that there is a competition going on to to advance day? I. It is possible to whoever gets there. I would have essentially, you know, an uncatchable advantage right because because it would build on it self accelerate based on what is come before. So essentially, whoever gets it is on catchable city, God this dramatic advantage. So you don't see what you saw with nuclear weapons where the two sides kind of offset each other. And there was this mutual swept Shen. I mean, most you might you might have it situation where one side if they can control this super intelligent. I've got this overwhelming advantage. So that's kind of scary as well. So they're real issues there. I personally again, I think that's pretty far in the future. 
I would say that by the time we get to the point where that's a real concern, we're going to learn a lot that we don't know yet, so we may be in a bit better position to control it. But I'm definitely in favor of investigating these issues. There are a number of think tanks that have been set up; Nick Bostrom's Future of Humanity Institute is one of them, and Elon Musk has funded OpenAI, which is also working on this issue: how do you build an intelligent machine that is controllable, in the sense that it will do what we want it to do and won't do things that harm us?
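Bostrom's urn metaphor described above can be illustrated with a toy simulation: each new technology is a draw from an urn, almost every draw is benign, and the question is how long a civilization keeps drawing before it pulls a black ball. A minimal sketch, not Bostrom's actual model, with a purely invented probability:

```python
# Toy illustration of the "urn of inventions" metaphor; p_black is invented.
import random

def draws_until_black(p_black=0.001, max_draws=100_000):
    """Number of technologies drawn before the first civilization-ending one."""
    for n in range(1, max_draws + 1):
        if random.random() < p_black:
            return n
    return max_draws  # never drew a black ball within the horizon

random.seed(0)
runs = [draws_until_black() for _ in range(1_000)]
print(f"Average draws before the first black ball: {sum(runs) / len(runs):.0f}")
# With p_black = 0.001 the expected wait is about 1,000 draws, but individual
# runs vary wildly, which is the point of the metaphor: a long safe streak so
# far may simply be luck.
```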

Nick Bostrom Elon Musk Russia Facebook humanity institute America one day
"humanity institute" Discussed on Inside the Hive with Nick Bilton

Inside the Hive with Nick Bilton

04:25 min | 2 years ago

"humanity institute" Discussed on Inside the Hive with Nick Bilton

"More sane way than Elon Musk has about the potential downfalls of robots and how they could destroy society and so on and he recently put out a paper, which I thought was so fascinating about how technologies almost all technologies. Have a good side and the bad side. And we're seeing that, of course, was social media these days with Facebook and so on, but but he talks about technologies in terms of earn. And that each time you pull technology out of out of the earn. It's like a ball and some of them are white and some of them are gray. And and the great ones, of course, are used negatively. But we've never he says pull out. A black one yet a black one that could essentially destroy civilization or most of it and his theory. Of course, is that the we haven't done that simply by luck that we have been lucky that, you know, the the Cold War, for example, we didn't bomb each other because the speed with which took for Russia to send a nuke over. Here would have been a few minutes in given enough time to send one over there. And before you know, it there's no more Russia. There's no more America. And maybe everyone else in between. And so it became it actually had the adverse effect where it kind of calmed things created the stalemate where we knew we couldn't win and they knew they couldn't win. So it didn't happen. But that as we start to look at a I and as it does start to become more intelligent and the barrier to entry does become easier. One of the things of boss says, of course, that that you know, the. Thing we talked about earlier where anyone may be able to create a I until it what to do in the in the future that we will one day maybe pull out one of these black balls from the earn. And and that could be it. Do you think that Bostrom is right to say this or is he kind of looking at it from this pessimistic point of view that we're eventually just going to screw this whole thing up. Well, I definitely think he's looking at it from a pessimistic perspective. But I think what he says is is fundamentally possibly could happen. I mean, his focus in terms of artificial intelligence is on again on building AGI, right? An artificial general intelligence, something that would be at the level of a human being in terms of its ability to think in conceive ideas and the assumption on the part of almost everyone thinks about this is that once you reach that stage almost instantly it becomes smarter than human beings. So now, we've got a superintendence we've got something that that might you know. The difference between this entity and us might be the same as the difference between us and insect might just badly beyond us. And at that point. How do we control it a number of issues with that the other thing that comes up is that there is a competition going on to to advance day? I. It is possible to whoever gets there. I would have essentially, you know, an uncatchable advantage right because because it would build on it self accelerate based on what is come before. So essentially, whoever gets it is on catchable city, God this dramatic advantage. So you don't see what you saw with nuclear weapons where the two sides kind of offset each other. And there was this mutual swept Shen. I mean, most you might you might have it situation where one side if they can control this super intelligent. I've got this overwhelming advantage. So that's kind of scary as well. So they're real issues there. I personally again, I think that's pretty far in the future. 
I would say that by the time we get to the point where that's a real concern. We're going to learn a lot that we don't know yet. So we may be in a bit better position to control it. But I'm definitely in favor of investigating. His issues are number of think tank that have been set up Nick Bostrom. The future of humanity institute is one of them Alon musk is funded open, which is also working on his issue. How do you build an intelligent machine that is controllable in fence that that will do what we wanted to do and won't do things that that harm us?.

Nick Bostrom Elon Musk Russia Facebook Alon musk humanity institute America one day
"humanity institute" Discussed on The End of the World with Josh Clark

The End of the World with Josh Clark

03:50 min | 2 years ago

"humanity institute" Discussed on The End of the World with Josh Clark

"Com slash end of the world or text end of the world to five hundred five hundred we know, so you're going to get a chance to do something. That. It's probably about here that you should meet Nick Bostrom he chimed in earlier. But what I mean to say is that you should know more about him as his work forms a lot of the basis of this series in Oxford England, there is a university among the world's oldest where people have been teaching since at least ten ninety six nearly a thousand years and housed in a three story. Tim brick administration building called little gate house is the future of humanity institute. The F HI was founded by Nick Bostrom, who as I said is a philosopher, and it is a center where people from a wide array of disciplines come together to consider the ways that humanity could accidentally wipe itself out in the near future. And also how to prevent that. And also what we might do with ourselves. If we're able to negotiate the very tricky near future and actually survive into the far future. The great many of the ideas in the series came from those collaborations that arose FA. HI what Nick Bostrom mostly thinks about our existential risks. Existential risks are threats to life that have consequences so sweeping so utterly catastrophic. It should one of them befall us. It would spell the end of humankind. No more humans. And if it turns out that we are the only intelligent life in the universe. No, more intelligent life, anywhere at all. What makes existential threat so dangerous in addition to the catastrophe? They bring is that they are unlike any other type of risk where used to encountering. With virtually every other type of threat posed to humans. We can reasonably expect that enough of us will be left alive to continue. Our species should one befall us ticket disastrous changing climate, for example, imagine that a couple of decades from now we humans are caught totally offguard by sudden shift in the global climate far more pronounced an abrupt than the warning signs were currently experiencing a rapid rise in sea levels. Drowns coastal towns around the world, sending huge populations of people inland, which puts an enormous strain on the city's that absorbed them at the same time massive droughts and floods breakout in virtually every food producing region of the world, the ecological collapse leads to social collapse food supplies to adult water supplies, become salty, and untold number of people begin to die more than ever have in human history. Even more killed in wars that break out over the precious resources that remain in just a handful of decades. The entire human race is reduced from ten billion to just one hundred million people living in scattered settlements across the globe. As categorically awful is such an experience would be it would not spell the end of humans. Even with just one percent of the population left alive. We could reasonably expect that one hundred million people living across the world would be enough to carry the human race along and eventually to rebuild to be sure. 
we would be set back substantially. All of the progress that we made as a global civilization would be pushed back thousands of years, almost to square one. Almost. There's a substantial difference between the perhaps fateful series of events that led to the discovery of something like smelting iron and carbon into steel, and having people who remember learning that if you add carbon to iron you can make steel, or that there's such a thing as coffee, or that you can make wine from grapes. And if you spend a...

Nick Bostrom FHI humanity institute Oxford England thousand years one percent
"humanity institute" Discussed on The World Transformed

The World Transformed

03:19 min | 2 years ago

"humanity institute" Discussed on The World Transformed

"I'm filled our master and all this week. I've been talking with entrepreneur and futurist. Nate grunted. How're you doing Nate? Doing really well, Phil. Well, it's great to have you back for a Friday show. You know, we usually kind of cut loose a little bit on Friday. We're going to geek out here at the end, but I thought before then as promised on Wednesday, we're going to talk about the future of everything. So I hope you're ready. I'm ready. Got your future is shoes on, and let's let's start pick a topic. What what futures thing would you like to talk about? First? Let's try aliens aliens. You know, the great thing about aliens is they're not even necessarily futuristic, of course, as you know, they built the pyramids so they've been around for quite some time and ancient structures on Mars, even long long before that we did a show just a few weeks ago talking about the future of humanity institute at Oxford University has put out a report and people who've. On the show for Andrew Sandberg was one of the contributors to chaotic Drexler was one of the contributors to time about the fact that it's perfectly plausible that were alone in the universe. They've run the numbers and they've done the math and they're not saying there's no aliens. They're saying there could be knowing lanes, and we contrast that with a story that said, there's enough good evidence now on the UFO side that we should really be seriously scientifically investigating UFO. So right. I mean, the truth is out there in lies somewhere between those two positions right that there's probably. There's no good reason to believe there's aliens and even the like crazy alien theories, right? Have enough evidence behind them that they could be scientifically exact investigated. So where do you, where do you stand on homeadvisor Dan, will you know, is actually really interesting. I was in a cab ride the other day and my cat. You know, Uber driver was talking that she had an alien encounter and I think there are some. Yeah, I know. I think they're all these like YouTube videos going around. I've never seen one, but apparently they're super popular talking about how you know they're lizard people and alien counters happen all the time. So apparently this is, you know, in commedia now it's zeitgeist to have this conversation and people really believing they have alien counters. Now, I don't know. You know, I think reality subjective and. I've never had an alien counters. So who am I say? But yeah, it's, it's also really interesting seeing, you know the the news reports about, you know, these fighter pilots having these weird encounters with objects that they don't understand. So it's interesting when you hear it from the navy or the airforce or when it's a major astronomer who has seen something that's weird or an astronaut when when you when you hear it from some random person you, you know, if you're Uber driver tells you there's aliens every that's one thing, right? It's one and of course Lord knows they encounter enough people that they, you know, they, they've seen some stuff but, but when you hear it from a presumably authoritative source, it takes on a whole different feel to it. Exactly at that point. And one of the things that I said, Stephen last time we talked about this is the fact that there might be weird. Visual phenomena occurring in the sky is still a big leap to there are extraterrestrial civil..

Nate YouTube Anders Sandberg Stephen Phil Navy Oxford University Drexler Lord Dan
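The "run the numbers" point above refers to propagating uncertainty through the Drake equation rather than multiplying single best-guess values. Below is a minimal sketch of that kind of calculation, assuming invented log-uniform parameter ranges purely for illustration (they are not the report's actual priors); the idea is that once the enormous log-scale uncertainty in terms like "probability that life emerges" is sampled rather than averaged, a large share of the probability mass can land at "fewer than one other civilization".

# Illustrative toy Monte Carlo over the Drake equation.
# The parameter ranges are assumptions made up for this sketch,
# not the priors used in the report discussed above.
import math
import random

def sample_log_uniform(low, high):
    """Draw a value whose logarithm is uniform between log(low) and log(high)."""
    return math.exp(random.uniform(math.log(low), math.log(high)))

def drake_draw():
    """One Monte Carlo draw of N, the expected number of detectable civilizations."""
    r_star = sample_log_uniform(1, 100)        # star formation rate per year
    f_planets = sample_log_uniform(0.1, 1)     # fraction of stars with planets
    n_habitable = sample_log_uniform(0.1, 10)  # habitable planets per such star
    f_life = sample_log_uniform(1e-30, 1)      # probability life emerges (huge uncertainty)
    f_intelligent = sample_log_uniform(1e-3, 1)
    f_communicating = sample_log_uniform(1e-2, 1)
    lifetime = sample_log_uniform(1e2, 1e8)    # years a civilization stays detectable
    return (r_star * f_planets * n_habitable * f_life
            * f_intelligent * f_communicating * lifetime)

random.seed(0)
draws = [drake_draw() for _ in range(100_000)]
p_alone = sum(1 for n in draws if n < 1) / len(draws)
print(f"Fraction of draws with fewer than one other civilization: {p_alone:.2f}")

With ranges this wide, most draws land below one, which is the qualitative point the hosts attribute to the report: not that there are no aliens, but that "no aliens" is entirely plausible.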
"humanity institute" Discussed on Waking Up with Sam Harris

Waking Up with Sam Harris

01:46 min | 3 years ago

"humanity institute" Discussed on Waking Up with Sam Harris

"It was a good conversation and so now i bring you robin hanson ladies and gentlemen welcome sam well thank you all for coming out really it's it's amazing to see well or see some fraction i'm gonna jump right into this we have a very interesting conversation the head of us because i have a great guest he is a professor of economics at george mason university he's also a research associate with the future humanity institute which you might know folks on existential risk and the big topics ethical importance here's a phd in social science from caltech a master's in physics and the philosophy of science he did nine years of research with lockheed and nasa studying mostly artificial intelligence and also basin statistics and he's recognized for his contributions in economics and especially in prediction markets but he's he's made contributions in many other fields and he has written a fascinating book which unfortunately is not for sale here today but you should all by this book because it was really it's amazingly accessible and he'd just touches so many interesting topics that book is the elephant in the brain hidden motives and everyday life please welcome robin hanson.

George Mason University research associate Future of Humanity Institute Lockheed Robin Hanson professor of economics NASA nine years
"humanity institute" Discussed on Global News Podcast

Global News Podcast

02:02 min | 3 years ago

"humanity institute" Discussed on Global News Podcast

"He quickly than quantum computing really could be a breakthrough in bought a but that sets up a unique challenge for humanity how will we stay in control when the machines we've built assume much cleverer than we are and as sandberg is from the future of humanity institute at oxford university the deep at problem here is that intelligence if we defined that us at the ability to solve a problem really well is very in a disconnect that trump solving the right problem so we really need to think about safety and how beneficial artificial intelligence essays but this is of was a tremendously deep properly and typically we don't know what wrong is until we see it we do have to be very careful about the way in which we use these extremely powerful machines particularly when what they're doing exceeds our own understanding they may well produce results that are incredibly a track have to us so minimizing accidents better diagnoses new drugs and so forth but if we don't understand the ways in which they are doing that we will not be able to control them in the way that we are going to need to if we are going to live safely with intelligent machines in the central to come and my report by tom field on the french president emmanuel macron's called for the traditional against to be included in the list of the world's most important cultural symbols last year the un cultural organisation unesco decided to add neapolitan pizza to the list and the french want to show that that bread is world class to he's cowfield reports from paris millon those who break the first budget both at 4 new peruvian soldiers so that they could have a lome remove that was more easy to carry on a march was that the fancy comestible in truth later from vienna what was it labor laws of the 1920s that made it popular baker.

quantum computing Sandberg Oxford University Tom Field Emmanuel Macron Vienna artificial intelligence French president Paris
"humanity institute" Discussed on The Guardian's Science Weekly

The Guardian's Science Weekly

01:41 min | 4 years ago

"humanity institute" Discussed on The Guardian's Science Weekly

"This people stuff started to understand that if even with lower tech ai topping now right if these are really important questions to have the goals of the machine in line with yours and have it understand you a if you tell your selfdriving taxis to take it heathrow as fast as possible you don't wanna get their covered in vomit than chased by helicopters and save over eur that's what i meant and the car replies the is an exact be worth you asked for you wanted to also understand the implicit things also want that that you didn't specify which normal human would understand because it hasn't brought her fame the reference rate so there's a lot of research on this now stood russell set up this new center in berkeley precisely studying these things the machine intelligence research institute in berkeley and the future of humanity institute in oxford center for until agents in cambridge all that he kidding efforts to to uh the tackling these questions and uh there's been some some promising work but we need much more resources dedicated to it you know right now i said we have to win the wisdom race between the power of ai in the wisdom well more than ninety nine percent of the investments in its are just going into making more powerful and there is almost zero funding for this kind of research but whether all not wisdom ole paolo wind south eventually another outcome max explored in detail in the book involves a 'yes humans and in tick elected colonisation one of the reasons i wanted to write this book as i wanted to write a more optimistic book.

Russell Berkeley Humanity Institute Oxford Centre Cambridge ninety-nine percent
"humanity institute" Discussed on KQED Radio

KQED Radio

02:09 min | 4 years ago

"humanity institute" Discussed on KQED Radio

"Now the fears around the development of artificial intelligence computers super intelligence is a long way from the stuff of scifi movies but several high profile leaders and thinkers have been worrying quite publicly about what they see as the risks to come are economics correspondent paul soloman exp floors that it's part of his weekly series making sense it's the greatest scientific history building a i artificial intelligence do you think i might his style is not up to the mice tang some version of this scenario as had prominent tech luminaries in scientists worried for years in two thousand fourteen cosmology stephen hawking told the bbc it is good spell on race and just this week tesla an spacex entrepreneur ilan must told the national governors association as a fundamental accessorised for clearance and i fully appreciate that okay but what's the economic angle well at oxford university's future of humanity institute founding director nick bostrom leads a team trying to figure out how to best invest well the future of humanity we are in this very peculiar scituate jason of looking back at the history species hundred thousand years old and now finding ourselves just before the threshold to what looks like it will be this transition to some posts human error off super intolerance that can colonist universe than than the last business a fierce philosopher bostrom has been perhaps the most prominent thinker about the benefits and dangerous humanity of what he called super intelligence for many years once three super talibans the faith of humanity may depend on what the super intelligence stuff there are plenty of ways to invest in humanity he says giving money to anti disease charities for example but bonds from thinks longer term about investing.

artificial intelligence BBC Tesla Elon Musk National Governors Association Oxford University Humanity Institute Bostrom Paul Solman Stephen Hawking SpaceX founding director Nick Bostrom hundred thousand years