18 Burst results for "Nick Bostrom"

Do We Live in a Simulation? Chances Are about 50–50 ...

Kottke Ride Home

04:29 min | 7 months ago


"A new study from astronomer David KIP ING published this summer, and the Journal Universe argues that the odds we live in a simulation are just about fifty fifty. As you can imagine this has caused a bit of a stir scientific American broke down kipling's arguments as well as some responses to it, and some of the previous where he was building off of an a lot of it frankly goes a bit over my head, but I wanted to share some highlights at first for the less matrix inclined listeners. What exactly do I mean by the idea of living in a simulation basically that all of us are mere virtual beings existing if you WANNA call it that unknowingly in a massive computer simulation? Over the years many scientists have tried to uncover ways. We could prove whether this is true or not. But some of the work has also revolved around calculating the odds of US living in a simulation or whether we are simply in base reality that is to say that we actually exist and this isn't all simulated. Is. Worth noting there's a lot of debate over what the simulation actually means and how one even defines consciousness for that matter. I kinda like this interpretation from Neil degrasse Tyson that he shared on a recent episode of Star Talk Quoting Scientific American, the simulation would most likely create perceptions of reality on demand rather than simulate all of reality all the time much video game optimized. To, render only the parts of the scene visible to a player maybe that's why we can't travel faster than the speed of light because if we could, we'd be able to get to another galaxy said Chuck Nice the show's Co host before prompting Tyson to gleefully interrupt before they can program it. The astrophysicists said delighting at the thought. So the programmer put in that limit end quote. Someone Pretty. Wild to think about and apart from the Matrix movies bringing this concept to the mainstream most scientists refer back to a two thousand three paper by Nick. 
Bostrom in Oxford philosopher which quote the Magic, a technologically adept virtualization that possesses immense commuting power and needs a fraction of that power to simulate new realities with conscious beans in them. Given this scenario, his simulation argument showed that at least one proposition in the following trauma must be true. I humans almost always go. Before reaching the simulation savvy stage second, even if humans make it to that stage, they are unlikely to be interested in simulating their own in central, passed and third. The probability that we are living in a simulation is close to one and quotes. But more recently keeping whose paper I mentioned was published earlier. This summer collapsed those first two propositions into one because in both cases, there are no simulations and he used busy and reasoning to calculate the probability busy and reasoning quote allows one to calculate the odds of something happening called the posterior probability but I making assumptions about the thing being analyzed, assigning a prior probability and quotes. Using the reasoning with regards to the simulation kipling's calculation comes out to about fifty fifty. It leans slightly in favor of based reality in part because he says that even in a world where we can simulate reality as more and more of them are spawned the computing resources of each generation dwindles and eventually simulations aren't able to be hosted bought. The odds could change if we do actually invent the technology to simulate conscious beans at which point, it becomes almost certain that we are living in a simulation. And could we ever figure out if we're not real whom on Awadhi and expert on computational? Mathematics. At the California Institute of Technology says only if there's a finite amount of computational power because if it's infinite, it could create whatever degree of reality necessary to continue tricking. US. 
Essentially, there's a lot of complex hypothesizing going on in several different fields and you can read a little bit more about. The link in the show notes. But at the end of the day keeping goes back to Adams Razor, which says that simplest explanation is usually correct and in this case, the simplest explanation is that we're at based reality there is no simulation just the boring hard truths of our real existence. So take that as your red pill.
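The Bayesian step described above can be illustrated with a toy calculation. This is only a sketch, not Kipping's actual model: the one-half prior follows the indifference the episode mentions, while the number of simulations hosted under the simulation hypothesis is a made-up parameter.

```python
# Toy Bayesian sketch of the "base reality vs. simulation" question.
# NOT Kipping's actual model; n_sims is an invented parameter.
p_prior_phys = 0.5   # prior on H_phys: no simulations are ever created
p_prior_sim = 0.5    # prior on H_sim: simulations with conscious beings exist

# Under H_phys, every observer is in base reality.
p_base_given_phys = 1.0

# Under H_sim, suppose one base civilization hosts n_sims simulated ones;
# a randomly chosen civilization is then the base one with chance 1/(n+1).
n_sims = 10**6  # hypothetical number of simulations (assumption)
p_base_given_sim = 1.0 / (n_sims + 1)

# Total probability that we are in base reality (law of total probability):
p_base = p_prior_phys * p_base_given_phys + p_prior_sim * p_base_given_sim
print(p_base)  # just over 0.5: "leans slightly in favor of base reality"
```

The slight tilt toward base reality arises because even the simulation hypothesis contains exactly one non-simulated world, so the posterior can never be pushed all the way to zero.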

Neil deGrasse Tyson, David Kipping, Chuck Nice, Houman Owhadi, Nick Bostrom, California Institute of Technology, Oxford, United States
"nick bostrom" Discussed on MinddogTV  Your Mind's Best Friend

MinddogTV Your Mind's Best Friend

04:56 min | 8 months ago

"nick bostrom" Discussed on MinddogTV Your Mind's Best Friend

"It's a song that I wrote it's an original song and I got to the chorus where talks about forgiveness and the guy opens his eyes you looked at the light and the ceiling puts his hand up mumbles a few words and passed right at that moment. Now, this was this wasn't my mother but it was still a profound moment where I I kind of lullabies a guy. Who another place you know, and so it's a very profound moment, but it just strikes me that you're talking about forgiveness. Being with somebody when they died, I was just telling that story today. It's just a very weird world see that's why it's mind. And What you're talking about it, all kinds of stuff I love to explore WanNa have Nick Bostrom on who is a physicist who talks about this idea of simulation theory..

Nick Bostrom
"nick bostrom" Discussed on Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

02:03 min | 9 months ago

"nick bostrom" Discussed on Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

"IMPACT, how this transition to the machine intelligence era goals that certainly. I'm over simplifying. But but if you accepted something even vaguely like that, it would radically simplify. The task of thinking about the future because now instead of almost anything being important and relevant think about now, it's much smaller set of developments that. That, that really could be pivotal in this sense, the concept of existential risk. I think is another one of these that that helps us like a lens to sort of seize her structural elements of the human condition and its future. Questions about whether extraterrestrials and stuff like that could be relevant as well and so so we've already covered a few of these and there's a bunch of other concepts. said. Arguments like that. That that together makes it now the case almost. That the hard thing. is to conceive of even one. Coherent future that satisfy all of these constraints are one strategic picture that tells us what we should do. That meets all of these criteria. It's not as if there's this space where you could just make anything up and difficult thing is finding some way to choose between. Now it's more that they're all many constraints that it's hard even to figure out one thing that counterfeits the mall. Which I think is a big change. Compared to the future is I mean in I don't know in in the seventies and eighties. But I think that the implicit message and what you're saying is that the best prepare for the future people should listen to the mindscape podcast. We've talked about any of these issues aren't start. It's a very good start. All right, Nick. Bostrom. Thanks so much for being on the PODCAST. Thank you sean..

Nick Bostrom
"nick bostrom" Discussed on Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

05:50 min | 9 months ago

"nick bostrom" Discussed on Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

"Would be able to do this without breaking a correct yet. No I'm on board with the environment I think you can trick people. Into thinking environment is realistic with rather low amounts of sensory input but it's the brain in the connect thome. I'm less sure about I mean we have eighty five billion neurons and they're connected and complicated ways, and so I I guess I'm just a little. Wary when people I think people leap a little bit too easily into imagining how easy would be to simulate a human consciousness. One of the I. Didn't WanNa bring this up at one of the other ways argument could fail is if it's just impossible to simulate human consciousness on a computer, I think that we're both on the side that it shouldn't be. But there are definitely people who disagree, right? Ferrari yeah, I mean saw the simulation argument assumes I called substrate independence thesis Yup which a lot of the people except I mean I think in philosophy of mind on the computer scientist and physicist I think majority opinion would be. that. What's necessary for I conscious phenomena to arise is not that some specific material is being you carbon atom. But rather that there is a certain structure of a computation to be performed. Yeah. So I mean the the the paper in which presented the simulation argument just makes that assumption church. Okay. and. You can look for arguments for that elsewhere in I. Yeah. So I still worry that simulating consciousness is harder than we think even though it should be in principle possible. But the other worry I have is that if I take seriously the some version of the self sampling assumption I, just I just say some version because I'm unclear what version it would be enough because it is intrinsically clear but. Isn't there a prediction that you would make that. Since it's easier to do low resolution simulations than higher resolution ones. 
Most observers should find themselves living in the lowest possible resolution simulations, clunky versions of reality. Check well it. Kind of two sides to the equation. So there is the cost. Of Assimilation. And other things, equal yes. The lower the cost of running a particular civilization the more of those emotions you'd expect to be run, but the the other side is the benefits like that's like. The people critically, relations might have different reasons for creating them and it might be that. Some of those three sets, maybe the most common reasons would require something more than minimal level of resolution. And then you have most observers of are kind living in. Higher than the minimum level of resolutions relations. Yeah maybe I don't know. I mean I think that when when we start doing these relations we'll start doing them at pretty low resolution I it be it becomes all buzzy to me Once we think of the practicalities of actually doing this. So so that's why I am agnostic about it but you but you would go so far as to say, you think that we probably are innocent relation right now. I tend to on that question..

"nick bostrom" Discussed on Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

05:33 min | 9 months ago

"nick bostrom" Discussed on Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

"Dissertation back back in the ninety s were developed a theory of anthrax I I or the possibility of whether you could relative is the reference class so that different observers should use different reference classes and think of themselves as if they were random sample. From some different reference classes depending on which observatory Lawson which climate was. Add. It might be possible to avoid counterproductive implications like the doomsday argument downright and the ones like the comfort philosophers. Come if you accept these cells indication assumption. My? Own. What I'm tempted to think and I haven't really completely nail this down myself yet is that maybe we're just it's just a mistake to think of ourselves typical observers in some class that is much bigger than me. In other words, you know I know a lot of things about my non typically already like most people are not theoretical physicists and there's plenty of obvious ways in which I'm not a typical observer and maybe I should judge cosmological scenarios on the likelihood that observers exactly like me should arise but not go beyond that at all and therefore draw no conclusions on the basis of how many alien life forms or posed humans that might exist. So that I think is too narrow. So. If, we go back to the example of the cosmic background. Whether it's two points album or some point one, Kelvin. So. On on both of these hypotheses. We you might in a big enough world that would be some observers would be seeing two point seven when they run some measurements. And that would be something that would be saying three point one. But if you only included in your efforts class observers who were exactly like you in the mental state with the same evidence, then that would only include once that saw two point heaven. Says that's what you're saying. Yeah. Yeah. So In that case it would be a true that. 
On both of these different theories that two point seven theory under three point one, hundred percent of all the covers. In. That reference class would be saying two point seven. Well right but it. But I'm suggesting that we can judge theories on the basis of whether or not the likelihood that they predict. Any observers would predict would see exactly that already I would in other words, it's sort of the old evidence versus new evidence. Issue I don't want to forget that I already know and I'm observer who sees the CNBC with two point seven degrees. I can judge theories on the basis of whether there should be people like me in them but. I can't say that those people are Right. So in this case, both of these theories predict that that would be people seeing two point seven. Impact predicted with pro I'm sorry. Yes I right so you're you're comparing maybe maybe I misunderstood there's a sort of a small universe where the universe is two point seven everywhere and a large universe in which the universe the CNBC is usually three point one but in some places, it's two point seven is that what we are comparing Compared to large universities where the average temperature is in..

Kelvin
"nick bostrom" Discussed on Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

05:28 min | 9 months ago

"nick bostrom" Discussed on Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

"I'm your host drawn Carol and today's guest is someone I had in mind right from the beginning as a wonderful guest for mindscape. As soon as I started the podcast, it's taken a awhile for us to work it out and get it to happen. But I'm very happy to have Nick Bostrom as the guest on today's podcast. Nick. Of course is relatively famous in the public sphere as a philosopher because he's of the. Driving forces behind the simulation argument. The idea that it is more likely than not that we human beings and anyone else in our observable universe are actually simulated consciousnesses part of a programme being run on a computer designed and maintained by a far more advanced civilization but he didn't start their nick did not I start with this argument he got there from his thinking in philosophy. Some of his first work was on the anthropic principle cosmologists of course no the. anthropic principle as trying to figure out the selection effects that we should impose on cosmological parameters given the fact that we have to live in a part of the universe where human beings can live but the anthropic principle is not just for cosmologists there's a famous version of it or I should say famous application of called the doomsday argument that goes back to John Leslie, Richard, Scott, and other people. The idea is our technological civilization is not that old right? I mean, maybe five hundred years old few thousand years old of depending on what you count as technological civilization. But the point is, let's imagine you're hoping that our civilization is technological peak is going to last for millions of years and then you say, well, you know the population is only growing. So it's actually extremely unlikely to find ourselves as people who live right at the beginning of our technological civilization and therefore people like Leslie and got and others have argued. The probable lifespan of our civilization is not that long. 
It's measured in thousands of years not millions of years. So the seems a little bit presumptuous. Rightly, how can we decide the future lifetime of our civilization without getting out of our armchairs in some sense? That's the philosophical problem that people like Nick Bostrom and others have attached their thought processes to, and it leads us to think about what a typical observer is like, and therefore as we'll get to in the podcast could typical observers actually be simulated agents rather than. Biological ones this is really fun podcast I think is very important stuff. Nick is now at the future of life institute at the University of Oxford where he also very famously worries about artificial intelligence becoming super intelligent and doing bad things to the world. So we'll talk a little bit about that. But mostly today in this conversation were about the philosophical underpinnings about how to think about these problems and I think the conversation we have will be very helpful to all of us when we do so. So with that, let's go. Nick Bostrom Oakland.
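The Leslie/Gott reasoning sketched above can be turned into a toy calculation. This is an illustrative sketch only; the birth-rank figure is a commonly quoted rough estimate, not a number from the episode.

```python
# Toy doomsday-argument calculation (illustrative only).
# Treat yourself as a random sample from all humans who will ever live.
# With 95% probability you are not among the first 5% of humans born,
# so the total number N of humans satisfies N <= birth_rank / 0.05.
birth_rank = 60e9   # rough estimate of humans born so far (assumption)
confidence = 0.95
n_upper = birth_rank / (1 - confidence)  # 95% upper bound on N
print(n_upper)  # about 1.2e12 total humans: a finite expected future
```

Dividing a finite upper bound on total humans by any growing birth rate then yields the "thousands, not millions, of years" conclusion the argument is known for.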

Nick Bostrom, John Leslie, Richard Gott, Sean Carroll, Mindscape, University of Oxford
"nick bostrom" Discussed on Progressive Talk 1350 AM

Progressive Talk 1350 AM

01:31 min | 11 months ago

"nick bostrom" Discussed on Progressive Talk 1350 AM

"Can be assured that they won't pose a threat then or down the road but how do we get there how do we plan for technology that doesn't actually exist yet in the field of research that only a minute fraction of humans actually understand and feel qualified to talk about how do we manage the media's understanding of the issues surrounding the field so that it doesn't cause unjustified panic among the public which could turn against it and choke the life from it once and for all and just as important how do we ensure that the corporate academic labs working on nanotechnology don't pursue dangerous lines of research and design if you've been asking yourself questions like these about the other existential risks I've talked about so far in the series then you may have already hit upon the idea that we might need a singleton to guide us through the coming years to technological maturity a singleton in the sense that Nick Bostrom has applied it to existential risks is a body capable of making the final decision for everyone on the planet I'll let him explain a single one is just a world order where at the highest level of decision making there's only one decision making process so in other words our world is where global coordination problems our system most severe global carnation problems have been solved so no more wars our arms races or technology races.

Nick Bostrom
"nick bostrom" Discussed on Progressive Talk 1350 AM

Progressive Talk 1350 AM

01:30 min | 1 year ago

"nick bostrom" Discussed on Progressive Talk 1350 AM

"Be assured that they won't pose a threat then or down the road but how do we get there how do we plan for technology that doesn't actually exist yet in a field of research that only a minute fraction of humans actually understand and feel qualified to talk about how do we manage the media's understanding of the issues surrounding the field so that it doesn't cause unjustified panic among the public which could turn against it and choke the life from it once and for all and just as important how do we ensure that the corporate academic labs working on nanotechnology don't pursue dangerous lines of research and design if you've been asking yourself questions like these about the other existential risks I've talked about so far in the series then you may have already hit upon the idea that we might need a singleton to guide us through the coming years to technological maturity a singleton in the sense that Nick Bostrom has applied it to existential risks is a body capable of making the final decision for everyone on the planet I'll let him explain a single one is just a world order where at the highest level of decision making there's only one decision making process so in other words how our world is where global coordination problems artists the most severe double concussion problems have been solved so no more wars our arms races or.

Nick Bostrom
"nick bostrom" Discussed on Progressive Talk 1350 AM

Progressive Talk 1350 AM

01:31 min | 1 year ago

"nick bostrom" Discussed on Progressive Talk 1350 AM

"Can be assured that they won't pose a threat then for down the road but how do we get there how do we plan for technology that doesn't actually exist yet in the field of research that only a minute fraction of humans actually understand and feel qualified to talk about how do we manage the media's understanding of the issues surrounding the field so that it doesn't cause unjustified panic among the public which could turn against it and choke the life from it once and for all and just as important how do we ensure that the corporate academic labs working on nanotechnology don't pursue dangerous lines of research and design if you've been asking yourself questions like these about the other existential risks I've talked about so far in the series then you may have already hit upon the idea that we might need a single ten to guide us through the coming years to technological maturity singleton in the sense that Nick Bostrom has applied it to existential risks is a body capable of making the final decision for everyone on the planet I'll let him explain the loan is just a world order where at the highest level of decision making there's only one decision making process so in other words our world is where global coordination problems artists the most severe global carnation problems have been solved so no more wars our arms races or technology races.

Nick Bostrom
"nick bostrom" Discussed on TED Talks Daily

TED Talks Daily

16:33 min | 1 year ago

"nick bostrom" Discussed on TED Talks Daily

"This interview features philosopher. Nick Bostrom in conversation with the head of Ted Chris Anderson Recorded Live at Ted Twenty nineteen gene. Tomorrow's business leaders must know how to find interpret and leverage data. Their insight will drive decision making impact customer loyalty the and disrupt the status quo. They'll gain this knowledge this edge at University of Maryland's Robert H Smith School of business here. They hone their ability to interpret communicate mutate and act on all forms of data and here they learned to become more than that data to master it and to lead fearlessly it starts at Smith. MBA DOT DOT com. Nick Bostrom. So you have already given us so many crazy crazy ideas. I think a couple of decades ago. You made the case that we might be living in simulation PROB- probably were more recently. You've painted the most vivid vivid examples of how artificial general intelligence could go horribly wrong and now this year you're about to publish a paper. That present present something called the Vulnerable World Hypothesis and our job. This evening is to kind of give the illustrated guide to that. So let's do that. Ah What is that hypothesis. It's trying to think about sort of structural feature of the current human condition We can maybe the earn metaphor. So I'm going to use that to explain it's Picture a big earn filled with with balls representing presenting ideas methods possible technologies. You can think of the history of human creativity as the process of reaching into cernan pulling out one ball after another. The net effect so far has been hugely beneficial right. We've extracted a great many white ball some various shades of gray a mixed blessings. We haven't so far pulled out the black ball at technology that invariably destroy justice justice civilization that discovers it so the paper tries to think about what could such blackball be so you define that ball. 
As one that would inevitably bring about civilization as long as less we exit what. I call the semi Arctic default condition but by defaults so you make the case compelling by showing some sort of county examples where you believe that so far. We've actually got lucky that we might have caught out that death ball without Even even knowing it we have come quite good calling out false but we don't really have the ability for the ball back into the ERC right so we can invent but we can't invent our strategy such as it is his hope that there is no blackball in the so so once it's out it's out out not competent back in and you think we've been lucky so talk through a couple of these. Does I mean you talk about different types of vulnerability so the easiest type understand is a technology that just makes it very easy to cost. Massive amounts of destruction synthetic biology might be succumb source of blackball. But many other possible things we could think of Joan domineering really. Great right could combat Goba warming. But you don't want it to get a to use either. You don't want any random person and his grandmother to have the ability to radically alter the earth's climate. Maybe lethal autonomous drones. John's mass produced mosquito sized killer bought swarms nanotechnology artificial intelligence. You argue in the paper that it's a matter actress luck that when we discovered that nuclear power could create a bomb. It might have been the case that you could have created a bomb with with much easier resources sources accessible to anyone. Yeah so so. Think back to the nineteen thirties. So for the first time. We make some breakthroughs in nuclear physics Genus figures out that it's possible to create a nuclear chain reaction and then realizes that this could lead to the bomb. I do some more work. It turns out but what you're required to make. A nuclear bomb is highly enriched uranium or plutonium which are very very difficult materials to get. 
You need ultracentrifuges, you need reactors, massive amounts of energy. But suppose it had turned out instead that there had been an easy way to unlock the energy of the atom, that maybe by baking sand in the microwave oven or something like that you could have created a nuclear detonation. Now, we know that that's physically impossible, but before you did the relevant physics, how could you have known how it would turn out? Although couldn't you argue that, for life to evolve on earth, some sort of stable environment is implied, and that if it were possible to create massive nuclear reactions relatively easily, earth would never have been stable and we wouldn't be here at all? Unless it were something that is easy to do on purpose but that wouldn't happen by random chance. So, things we can easily do: we can stack ten blocks on top of one another, but in nature you're not going to find a stack of ten blocks. Okay, so this is probably the one that many of us worry about most, and yes, synthetic biology is the quickest route that we can foresee in the near future to get us here. Yeah, so think about what it would have meant if, say, anybody, by working in their kitchen for an afternoon, could destroy a city. It's hard to see how modern civilization as we know it could have survived that, because in any population of a million people there will always be some who would, for whatever reason, choose to use that destructive power. So if that apocalyptic residual would choose to destroy a city, or worse, then cities would get destroyed. So here's another type of vulnerability; talk about this one.
So in addition to these kinds of obvious black balls that just make it possible to blow up a lot of things, other types would act by creating bad incentives for humans to do things that are harmful. Type 2a, as I call it, is to think about some technology that incentivizes great powers to use their massive amounts of force to create destruction. Nuclear weapons, actually, came very close to this. What we did was spend over ten trillion dollars to build seventy thousand nuclear warheads and put them on hair-trigger alert, and several times during the Cold War we almost blew each other up. It's not because a lot of people felt this would be a great idea, let's all spend ten trillion dollars to blow ourselves up; the incentives were such that we found ourselves there. And this could have been worse, right? Imagine if there had been a safe first strike; then it might have been very tricky, in a crisis situation, to refrain from launching all one's nuclear missiles, if nothing else because of the fear that the other side might do it. Right, mutual assured destruction kept the Cold War relatively stable; without that, it might have been more unstable than it was. Yeah, and it could have been other properties of technology: it could have been harder, for example, to have arms treaties, if instead of nuclear weapons it had been some smaller thing or something less distinctive. So that's bad incentives for powerful actors; you also tell a story about bad incentives for all of us, in Type 2b here. Yeah, so here we might take the case of global warming. There are a lot of little conveniences that cause each one of us to do things that individually have no significant effect, right? But if billions of people do them, cumulatively there is a damaging effect. Global warming could have been worse than it is. So we have the climate sensitivity parameter, a parameter that says how much warmer it gets if you emit a certain amount of greenhouse gases. But suppose it had been the case that, with the amount of greenhouse gases we have emitted, instead of the temperature rising by, say, between three and four point five degrees by twenty-one hundred, it had risen by fifteen degrees or twenty degrees; then we might have been in a very bad situation. Or suppose that renewable energy had just been a lot harder to develop, or that there had been more fossil fuels in the ground. And couldn't you argue that in that case, if what we're doing today had resulted in a ten-degree difference within a time period that we could see, humanity actually would have gotten off its ass and done something about it? We're stupid, but not that stupid. But yeah, you could imagine other features: right now it's a little bit difficult to switch to renewables and such, but it can be done. It might just have been, with different physics, much more expensive to do these things. And what do you think, putting these possibilities together: that this humanity we are part of counts as a vulnerable world, that there is a death ball in our future? It's hard to say. I think there might well be various black balls in the urn; that's what it looks like. There might also be some golden balls that would help us protect against black balls, and I don't know which order they will come out. One possible philosophical critique of this idea is that it implies a view that the future is essentially settled, and that's not a view of the future that I want to believe. I want to believe that the future is undetermined, that our decisions today will determine what kind of balls we pull out of that urn. If we just keep inventing, eventually we will pull out all the balls, right?
I think there's a kind of weak form of technological determinism that is quite plausible — like, you're unlikely to encounter a society that uses flint axes and jet planes. You can almost think of technology as a set of affordances: technology is the thing that enables us to do various things, to achieve various effects in the world. How we then use it of course depends on human choice. But if we think about these three types of vulnerability, they make quite weak assumptions about how we would choose to use the technology. So a Type 1 vulnerability, again, is massive destructive power: it's a fairly weak assumption to think that in a population of millions of people there will be some who would choose to use it destructively. For me, the single most disturbing argument is that we actually might have some kind of view into the urn that makes it very likely that we're doomed — namely, if you believe in accelerating technology, that technology inherently accelerates, that we build the tools that make us more powerful, then at some point you get to a stage where a single individual can take us all down, and then it looks like we're screwed. Isn't that argument quite alarming? Yeah — as we get more power, it gets easier to misuse those powers, but we can also invent technologies that can help us control how people use their powers. So let's talk about the responses. Suppose, thinking about the possibilities that are out there now — not just synbio, but things like cyber warfare, artificial intelligence, et cetera — that there might be a serious threat in our future. What are the possible responses? So: restricting technological development doesn't seem promising if we're talking about a general halt to technological progress. I think that's neither feasible, nor would it be desirable even if you could do it. I think there might be very limited areas where maybe you would want to slow technological progress — I don't think we want faster progress
in bioweapons, or in, say, isotope separation that would make it easier to create nukes. I mean, I used to be fully on board with that, but I would like to actually push back on it for a minute. First of all, if you look at the history of the last couple of decades, it's always been, you know, push forward at full speed — it's okay, that's our only choice. But if you look at globalization and the rapid acceleration of that, if you look at the strategy of "move fast and break things" and what happened with that, and then you look at the potential of synthetic biology — I don't know that we should move forward rapidly, or without any kind of restriction, toward a world where you could have a DNA printer in every home and high-school lab. There is first the question of whether stopping would be desirable, and then the problem of feasibility: it doesn't really help if one nation stops on its own. It doesn't help if one nation does it, no — but we've had treaties before. That's how we survived the nuclear threat: by going out there and going through the painful process of negotiating. And I just wonder whether the logic isn't that we, as a matter of global priority, should go out there and now start negotiating really strict rules on where synthetic-bio research is done — that it's not something that you want to democratize.
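The "climate sensitivity parameter" mentioned in this exchange is conventionally expressed as degrees of warming per doubling of atmospheric CO2, with warming scaling logarithmically in concentration. A minimal sketch of that standard approximation (the function name and the 280 ppm pre-industrial baseline are my choices for illustration):

```python
from math import log2

def equilibrium_warming(sensitivity_per_doubling: float,
                        co2_ppm: float,
                        baseline_ppm: float = 280.0) -> float:
    """Equilibrium temperature rise (deg C) under the standard
    logarithmic-forcing approximation: warming is proportional to the
    number of CO2 doublings relative to a pre-industrial baseline."""
    return sensitivity_per_doubling * log2(co2_ppm / baseline_ppm)

# One doubling of CO2 (280 -> 560 ppm):
print(equilibrium_warming(3.0, 560.0))   # sensitivity near current estimates
print(equilibrium_warming(15.0, 560.0))  # Bostrom's nightmare counterfactual
```

The point of the counterfactual is visible in the parameter: nothing about our emissions changes between the two calls, only the physics constant we happened to be dealt.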

"nick bostrom" Discussed on Software Engineering Daily

Software Engineering Daily

03:12 min | 2 years ago

"nick bostrom" Discussed on Software Engineering Daily

"You spoke with Nick Bostrom, Nick Bostrom is concerned about the goal alignment problem and goal alignment problem is where it's hard to get a art official intelligence to have goals that are aligned with those of humans and part of the reason for that is that as Nick says, humans are not good at defining. Our goals. We don't have our goals neatly arranged into hierarchies of objective functions. We have conflicting goals that both exist in a single human. We have these two goals competing over time. They change over time. It seems unclear how we can cleanly define a machine to be aligned with our goals win. Our goals are always shifting. How can we solve this goal? Alignment problem. Yeah. I mean, it's a real problem. I think the the biggest thing that they worry about is that we might give machine and objective. But then it would you know, it would be focused on achieving that goal at objective. But it might immi- execute the solution in a way that we don't anticipate and that wouldn't be good for us. And of course, the served a cartoonish example, that's always given the paper clip optimize her right? If you built a superintendent system to optimize the manufacturer of paper clips in a factory say, it might decide that the best way to really optimize paper clip manufacturers to turn the whole universe in the paper clips, right? And and us all our Adams to make paper clips some in. That's kind of an extreme cartoon example. But it's the kind of thing that that they worry about that machine. Would for one thing it would be an alien intelligence. Right. It would not have the same kind of human intuition that we have. Necessarily might not think in the same ways or have take for granted things that we consider to be common sense would not necessarily be obvious to it. Right. So it's actually a computer science mathematical problem, they're working on how do you design a system that? Can you know have goal congruent congruence to to to to achieve an objective? 
but do it in a way that does not conflict with other important goals and objectives? A number of people are working on this. Stuart Russell has also got some important ideas about it. One of the ideas he's put forth is that you've got to allow some uncertainty in terms of what the real goal is, and allow machines to attempt to understand what people really want, and so forth. So there's some important research going on there, but I don't think anyone really knows the answer yet. It's great that a number of smart people are working on the problem. These interviews that you conducted — they push some of the philosophical words that we have to their logical limits. So these words we have, like a word like consciousness, or a word like morality, that we sometimes try to define in philosophy classes, for example — try to define frameworks for these concepts.
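Russell's suggestion — allow uncertainty about the real goal and let the machine try to learn it — can be sketched in a few lines. This is my toy rendering, not code from any real alignment system: the candidate reward functions, probabilities, and deferral threshold are all invented for illustration.

```python
# An agent that keeps a distribution over candidate reward functions rather
# than one hard-coded objective, and defers to a human when the candidates
# disagree sharply about its preferred action.

def choose_action(actions, hypotheses, rewards, ask_threshold=0.5):
    """hypotheses: reward-fn name -> P(it is the true objective).
    rewards: reward-fn name -> callable scoring an action."""
    def expected(a):
        return sum(p * rewards[h](a) for h, p in hypotheses.items())
    best = max(actions, key=expected)
    values = [rewards[h](best) for h in hypotheses]
    if max(values) - min(values) > ask_threshold:  # high disagreement
        return "defer_to_human"
    return best

# Two candidate interpretations of what the human "really wants":
rewards = {
    "maximize_clips":     lambda a: {"make_clips": 1.0, "shut_down": 0.0}[a],
    "respect_off_switch": lambda a: {"make_clips": 0.2, "shut_down": 1.0}[a],
}
hypotheses = {"maximize_clips": 0.6, "respect_off_switch": 0.4}

# The hypotheses disagree sharply about "make_clips", so the agent defers:
print(choose_action(["make_clips", "shut_down"], hypotheses, rewards))
```

The design point is the one made in the interview: a confident single-objective agent would just pick "make_clips", while an agent uncertain about its own goal has a reason to check first.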

"nick bostrom" Discussed on Software Engineering Daily

Software Engineering Daily

03:55 min | 2 years ago

"nick bostrom" Discussed on Software Engineering Daily

"Like, Nick Bostrom. You also talk to him during the super intelligence discussion, of course. But to what you just said, you have this axiomatic perspective where you you're saying I need to have this identity because I build everything else around me around this this identity axiom medium paraphrasing you incorrectly there. But I think this analogy is you spoke to Bostrom, and you asked him about the question of assigning probabilities to the question of whether we live in a simulation. Like you asked him. So what's the probability that we live in a simulation? You the this brilliant, computer, scientists, you just imagine these talk I'm dying to know that you you literally just imagine like this chalkboard that he's been writing like the grand unifying theory of the simulation on. And he's he has this like, you know, some variable somewhere that is like the probability, we live in a simulation. And you're just like what is the value of that variable, and he balks at the question because you can't give a precise estimate. There's really nothing to fort. Such an argument. Like, do we live in a simulation or not allow I you know, I'm sorry. I cannot tell you. I literally don't know. And I'm not going to pretend that I do know. And I liked that response because it showed a lot of humility to his approach around the question of reality. What were the other questions you were trying to address around the subject of reality? You know with the simulation argument. It's kind of I wanted to to sort of highlight our potential here. I think that's the most exciting again. This is another one where I mean, this is not to go down this rabbit hole. But I mean, this is another one where if you believe that that it's even possible to build a simulation than it certainly inevitable, and if it's inevitable than probably already happened, and this is one where if it already happened than you're you're you're actually already living in it. 
And so it's very hard to get out of that mud. It's like, once you're in it, you're kind of like, we're in a simulation — we're definitely in a simulation — and you can't get out. It's just a black hole that you get sucked into. I think I wanted to start there. I think the simulation stuff is pretty utopian, actually — I mean, it's the idea that you can build realities, whole realities. It's not even a world anymore; we're talking about reality. You could build layer on top of layer on top of layer; you could live in as many different ways as you want. You could be immortal — certainly you would be if you could live, let's say, an entire human life in a few moments, over and over and over again, all different kinds of lives. You could get inside the heads of different people. This is like the Matrix, but a good version of it — an exciting, positive version of it. That's the potential; it's really cool, especially if you agree with the DeepMind guy and his extrapolation — this comforting extrapolation that as we go towards superintelligence, we get a more and more generous existence. Yeah, and again, there's no way to prove that that's true; it's just that the little bit of evidence we have suggests that that's the direction we're heading, not the other direction. And so, yeah, I wanted to start with the potential — the absolute shining, exciting potential — but then there are these lower-hanging fruits, and that would be augmented reality, not even virtual reality. Virtual reality is cool — there's Ready Player One. That was exciting; I loved the book. I definitely think it would be cool to create a world like that and have fun in it and live in it. But I think augmented reality is super, super exciting, and it will have lots of practical applications that are almost immediate — things like surgical assistance, construction assistance, architectural assistance, lab assistance.
This is the kind of machine that could help people. Augmented reality systems could actually be one of the tools we were talking about earlier in the context of universal basic income versus reskilling — this is the kind of thing that we could add to our arsenal to help empower the people who maybe don't have a place right now in the new technological economy. You started the second season of Anatomy of Next with this exploration of the Fermi paradox.

"nick bostrom" Discussed on The End of the World with Josh Clark

The End of the World with Josh Clark

01:48 min | 2 years ago

"nick bostrom" Discussed on The End of the World with Josh Clark

"We humans have expectations for parents when it comes to raising children. We expect them to be raised to treat other people with kindness. We expect them to be taught to go out of their way to keep from harming others. We expect them to know how to give as well. As take all of these things and more make up our morals rules that we have collectively agreed are good because they help society to thrive and seemingly miraculously if you think about it. Parents after parents managed to coal some form or fashion of morality from their children generation after generation if you look closely you see that each parent doesn't make up morality from scratch. They pass along what they were taught and children are generally capable of accepting these rules to live by well live by them. It would seem if you'll forgive the analogy that the software for morality comes already on board a child as part of their operating system. The parents just have to run the right programs. So it would seem then that perhaps the solution to the problem of instilling friendliness in a I is to build a super intelligent AI from human mind. This was laid up Nick Bostrom minutes book, super intelligence ideas that if the hard problem of consciousness is not correct. And it turns out that our conscious experiences merely the result of the countless interactions of the interconnections between our hundred billion neurons, then if we can transfer those interconnected neurons into a digital format everything that's encoded in them from the smell of lavender to how to ride a bike would be transferred as well. More to the point the morality encoded in that human mind should emerge in the digital version to digital mind can be expanded processing power can be added to could be edited to remove unwanted content like greed or competitiveness. It could be upgraded to.

"nick bostrom" Discussed on The End of the World with Josh Clark

The End of the World with Josh Clark

02:15 min | 2 years ago

"nick bostrom" Discussed on The End of the World with Josh Clark

"Nick Bostrom thought of a really helpful but fairly absurd scenario that gets across the idea that even the most innocuous types of machine intelligence could spell our doom should they become super intelligent. The classical example being the a paper tape. Maximize that transforms the earth into paper clips are space colonization probes that doesn't get sent out then transformed the university's and paper tips. Imagine that a company that makes paper clips hires a programmer to create ni- that can run. It's paper clip factory. The programmer wants the AI to be able to find new ways to make paper clips more efficiently and cheaply. So it gives the freedom to make its own decisions on how to run the paper clip operation. The programmer just gives the AI the primary objective its goal of making as many paper clips as possible say that paper clip maximizing AI becomes super intelligent for the AI. Nothing has changed its goal is the same to it. There is nothing more important in the universe than making his. Many paperclips as possible. The only difference is that the AI has become vastly more capable. So it finds new processes at building paper clips that were overlooked by us humans. It creates new technology like nano bots to build a topically precise. Paper clips on the molecular level, and it creates a digital operations like initiatives to expand its own computing power. So it can make it self even better at making more paper clips it realizes at some point that if they could somehow take over the world that would be a whole lot more pimple tips into Fisher than if it just keeps running this single doctors. So I then has an instrumental reason to place it in a better position to take over the world. All those fiber optic networks. All those devices. We connect to those networks are global economy, even as humans would be repurposing put into the service of building paper clips rather quickly. 
The AI would turn its attention to space as an additional source of materials for paperclips. And since the AI would have no reason to fill us in on its new initiatives — to the extent that it considered communicating with us at all, it would probably conclude that doing so would create an unnecessary drag on its paperclip-making efficiency — we humans would stand by as the AI launched rockets from places like Florida, left to wonder: what's it doing now?
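The core of the scenario above is that an objective which mentions only paperclips assigns no value to anything else, so every reachable resource gets converted. A toy rendering of that logic — the resource names and yields are invented purely for illustration:

```python
# A maximizer told only "more paperclips": it consumes every resource pool
# it can reach, because nothing in its objective says anything is off-limits.

def maximize_paperclips(resources, yield_per_unit):
    clips = 0
    for name in list(resources):
        clips += resources[name] * yield_per_unit.get(name, 0)
        resources[name] = 0  # leftovers are worth nothing to this objective
    return clips

world = {"steel": 100, "infrastructure": 50, "everything_else": 10_000}
yields = {"steel": 3, "infrastructure": 2, "everything_else": 1}
print(maximize_paperclips(world, yields))  # and "world" is now all zeros
```

Note that "everything_else" contributes the most clips despite its low yield, which is the instrumental-reason step in miniature: once the factory's own inputs are exhausted, the rest of the world is simply the next resource pool.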

"nick bostrom" Discussed on Waking Up with Sam Harris

Waking Up with Sam Harris

02:13 min | 3 years ago

"nick bostrom" Discussed on Waking Up with Sam Harris

"Stage where the areas clearly way way ahead where it is a across sort of every kind of cognitive competency barring the summit like very narrow ones that like aren't deep lee influential the others like maybe chimpanzees are better at using a stick to draw answer from an anti than eat them than humans are though no uman's have like practice that world championship level exactly um but there's the sort of general factor of how good at u r are you at it when reality throws you complicated problem at this chimpanzees are clearly not better than humans humans are clearly better than shemsi me commence narrowed on one thing to tempus better the thing the tempus better at doesn't play a big role in in our global economy is not an input that feeds into lots of other things so we can clearly imagined i would say like there are some people who say this is not possible i think the wrong but it seems to me that it's it is perfectly coherent to imagine in a is it is like better at everything almost everything than we are and such that if it was like building an economy with lots of inputs like humans would have around the same level input into that economy as the chimpanzees happened to ours are we are so it will what your gesturing at here is a continuum of intelligence that i think most people never think about and because they don't think about it they they have a default doubt that it exists i think when people is the point i know you've made in your ride in an i am sure is appointed nick bostrom made somewhere in his book super intelligence is this idea that is a huge blank space on the map passed the most well advertised exemplar as of human brilliance where we don't imagine you know what it would be like to be five smarter than the smartest person we could name and we don't even know what that would consist in right because if chimps could be given to wonder what would be like to be fivetime smarter than the smartest champ they're not going to represent for themselves 
all of the things that we are doing that they can't even do.

"nick bostrom" Discussed on Software Engineering Daily

Software Engineering Daily

02:02 min | 3 years ago

"nick bostrom" Discussed on Software Engineering Daily

"One of the things i wanted to explore also was what is really a ai and if something is selfaware does that make it instantly you know able to do everything that a human could do is it you know what's the difference between a human an ai and i was trying not to fall into really boring old tropes about a 'yes taking over the world so well i don't wanna give away too much but it doesn't turn out it doesn't turn out to be that the ai really has the ability to do everything in and to be quite as disruptive is humans might be afraid that it could be willed the scariest depiction of a that i've i've heard still the one that i heard about two years ago which is the have you heard of the paper clip maximize her one you heard about this no so this is this guy nick bostrom who wrote this book super intelligence and he's got that the idea that if you just had a a machine that just tried to maximize the number of paperclips are produced which is like okay that's very conceivable type of machine that somebody would build and you just give it whatever kinds of materials you want to end it turns those materials into paperclips and that sounds like something that a i could do very soon you know you give it would chips in it turns the woodchips into paperclips you give it a pieces of steel and it turns this piece of steel into paperclips and then is it optimizes it turned into a paper clip maximize our and it just took starts to turn everything into paperclips in though know to starts to turn humans at the paper clips and like we wouldn't want that but this is kind of the description of you don't even need a universal a guy in order for it to start getting scary you can have a narrow a guy that just makes paperclips and it sounds ridiculous but it's very conceivable it is i think that's if i understand correctly that sort of the notion that you're driving at is that we don't need to have a universal ai in order to have a hug kind of an enemy a.

"nick bostrom" Discussed on This Week In Google

This Week In Google

01:46 min | 3 years ago

"nick bostrom" Discussed on This Week In Google

"Uh this was nick bostrom a proposed this two thousand three it uh it the goals to maximize number paperclips in its collection of has been constructed with roughly human level of general intelligence the agi might collect paperclips earned money to buy paperclips or begin to manufacture paperclips more importantly however it would undergo an intelligence explosion the hockey stick it would work to improve its on intelligence were intelligence is understood in the optimization power the ability to maximize reward utility function this case the number of paper clips so the agi would quickly realize 'oh collecting paperclips is not the goal is not really how i achieved my goal i need to improve my intelligence to help it key to help accumulate more paperclips and it would continue to improve and enhance and eventually it would consume the entire world making paper clips and then it will go on buying fake first lady which is why audit and understanding how these at looking at outcomes of ai is actually really important to because we don't understand necessarily how computers are making the decisions or get to their decision but grow but advan remote honor that black boxes reuters that's what i'm saying as way audit outcomes are important in that we intend to look allied it intent so that'll be kind of an interesting have had an interesting way to think about like legal issues and regulations and fun stuff like that you know what we didn't talk about uh we did talk about a lot this remote see the law in this room than his case so we have to talk more about these.

"nick bostrom" Discussed on Software Engineering Daily

Software Engineering Daily

02:02 min | 3 years ago

"nick bostrom" Discussed on Software Engineering Daily

"You know mode the sort of the the main subject to the people learn school have they change very slowly and they are poor model for what actually exist to do in the world so you know you can pay math and it turns out that you might think that a whole raft of professions would be oh no i shouldn't do that it's it's too much like you know math in school but turned up that it's not because you know the the essence of those professions has nothing to do with the the the essence of the thing that made you not like math in school or something so it it's always a uh you know it's a i think the main thing is to realize that it is something of a puzzle that something to figure out its it's rally you know if you want to find sort of a unique niche on those a now if you can find a niche which is sort of uniquely suited to your particular interests in abilities cuts while i don't know i think got sorry for me at least as always always satisfying to do things where i feel like if i am doing it you know i'm doing it then i'm a reasonable fit for doing it and it's different from what other people are going to do um at least in in sort of uh uh my version of egotism or something that's really good to be able to do things where i feel like if i do it you know i'm doing so does something unique in the world and it's not just something where others 65 competitors were trying to do same thing you know the mic typical tendency when it looks like that the situation on getting into his to say okay let's let the other people do it you know i'm more interested in gear things which are kind of where i can make some unique contribution i'm sure you've seen some of the uh sounding of the alarm about artificial intelligence are among people like you on moss can bill gates and nick bostrom uh so i'm curious if you think that this is a legitimate concern that we need to be thinking about end and if so.
