12 Burst results for "Thomas Weapons Systems"

"thomas weapons systems" Discussed on On Point with Tom Ashbrook | Podcasts

On Point with Tom Ashbrook | Podcasts

08:52 min | 1 year ago

"...the ability to really link airplanes with ground tanks, in this case through a radio. So it was the combination of those technologies that gave a significant advantage. I think that's true going forward, not only with artificial intelligence but also autonomy and robotics. Well, we have to take a quick break. When we come back, we're going to hit this question of the ethical use of AI in the military, so stand by, everyone. This is On Point.

I'm Meghna Chakrabarti. Today we're talking about the coming future of an AI way of war, and I'm joined by General John Murray, the first commander of the US Army Futures Command. Patrick Tucker joins us as well; he's technology editor at Defense One. So now we are landing at that question of how you do AI warfare ethically, given the immense power that automated systems and AI-integrated militaries could potentially have. Let's listen to what the United States Secretary of Defense, Lloyd Austin, said in a speech at the Global Emerging Technology Summit earlier this very month. He vowed that while the US would compete to win in the field of AI, it would not, quote, "cut corners on safety, security or ethics. Our development, deployment and use of AI must always be responsible, equitable, traceable, reliable and governable. And we're going to use AI for clearly defined purposes, and we're not going to put up with unintended bias from AI. We're going to watch out for unintended consequences, and we're going to immediately adjust, improve or even disable AI systems that aren't behaving the way that we intend."

Well, we asked Gilman Louie — you heard from him a little bit ago, from the National Security Commission on Artificial Intelligence — to comment on how to achieve that ethical use of AI, so that the systems are not used, as the Secretary of Defense said, in unintended ways. Here's a little of what Gilman Louie told us: "Human responsibility cannot be passed to a machine. For humans make the decision of when to deploy these systems, when not to deploy these systems, and where the boundaries are in which these systems are designed to operate. All of those should meet the highest standards of ethical consideration as currently framed in IHL and in other forms of bilateral and multilateral agreements that we have around the world."

Joining us now is Heather Roff. She's a philosopher and political scientist; she wrote the Department of Defense's AI ethics principles. She's also a fellow at the Brookings Institution and an associate fellow at the Cambridge University Centre for the Future of Intelligence, and she joins us from Phoenix, Arizona. Heather Roff, welcome to you.

"Hi, Meghna, thanks for having me."

I wonder if I could ask you: are we already seeing evidence of a lack of ethical restraint in the use of AI on the battlefield? I'm thinking about what Patrick said earlier, that the Russians have actually field-tested systems in places like Ukraine and Syria — in an actual hot scene, with real lives and real people at stake. That's already happened. So is this question of international agreements on the ethical use of AI already moot?

"I wouldn't say that it's moot. The international community has been debating the question of at least autonomous weapons systems since 2014, 2015. The United States Department of Defense came out with its own policy on autonomous weapons systems in 2012, which — unfortunately, General Murray, I must disagree with you — does not state that a human will always be in the loop. That's DoD Policy Directive 3000.09. But these questions of what the international norm-setting standard is are still developing. And I should note that 'autonomy' and 'AI' tend to be used interchangeably, and I think we might want to keep some of those discussions slightly separate, given the breadth of AI applications."

Let me stop you there, Heather, because I appreciate the specificity here, and I want to have the most relevant conversation possible. So what term should we be using? Should we be using "autonomy" in the context of this conversation — or whatever ethical questions you want to raise?

"Right. So if we're talking about the hot wars and the pointy end of the stick, you're going to be talking about physical systems, and non-physical systems in the cyber realm. Those can be AI-enabled. But if you're going to talk about AI systems that are back-office, or that have to do with personnel, or with different types of decision aids and things of that nature — those have kind of longer-term, longer-tail effects that are still important. They're not the hot-war stuff that everybody puts their attention on, but they still have wide-ranging ethical implications for morale, for the way we fight and the way we train."

Okay, point taken. Now, you just said something I want to get a little bit more from you on, and then get the General to respond, because this does seem to be a central precept in how we might achieve ethical uses of AI: that a human decision-maker would always be in the loop. Can you just clarify — what is the policy right now?

"So the policy is 3000.09, and that's the policy around autonomous weapons systems and semi-autonomous weapons systems within the DoD. That policy outlines the types of permissions required to pursue a major defense acquisition program, and there are some special permissions that need to be in place to move forward on acquiring autonomous weapons systems. It does not, however, state that a human needs to be in the loop for those. By definition, an autonomous weapons system is a system that can select and engage targets without intervention by a human operator. Now, the question — if you want to think of it as a human always being somewhere — that's true. But if you're thinking about the very action of selecting and engaging targets, humans aren't going to be in those decisions — or I wouldn't even call them decisions at that level. They're not going to be in that tight find-fix-track-engage targeting loop. But there will be a human commander stating: this is the type of targeting plan that I have, these are the effects that I want to achieve, these are the munitions that I have. So if you think of this as an entire command-and-control chain, then of course there are humans there. But in that tighter loop, no autonomous weapons system is going to have a human in it, right? And that tighter loop, by definition, is one of the advantages of AI."

Right. So, General Murray, would you like to respond to that?

"I appreciate the correction on the policy, although I must also slightly disagree. It goes back to the discussion we had earlier — and I'm sure she'll get to this too — about whether you're making a lethal decision or a non-lethal decision. We're working on things right now that do a lot of what she's talking about in terms of target identification and target selection. But at the point that the trigger gets pulled on a lethal decision, we're working towards having a human in the loop. And I think that goes back to the ethical piece of artificial intelligence. So the policy may be exactly what she states, but that's not the direction the Army is going. When it comes time for a lethal decision — pulling the trigger, as she said — we're working towards artificial intelligence doing everything short of that, and then allowing a human to make that final decision. And I'll just add, from my experience growing up in the Army — I'll take you inside a tank, in the gunner's seat. The point is, we trained our gunners using flash cards..."
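The distinction the speakers are drawing — machine-automated target selection inside the tight find-fix-track-engage loop, versus a human gate on the final lethal decision — can be sketched in a few lines of toy code. This is purely illustrative: every name, class, and threshold below is invented for the example, and it does not represent DoD Directive 3000.09 or any real system.

```python
# Toy sketch of "human in the loop" for the lethal step only.
# All identifiers and rules here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    classified_as: str   # e.g. "armor", "civilian vehicle", "unknown"
    confidence: float    # classifier confidence in [0, 1]

def select_targets(tracks, threshold=0.9):
    """Automated find/fix/track: the machine filters candidate targets."""
    return [t for t in tracks
            if t.classified_as == "armor" and t.confidence >= threshold]

def engage(track, human_approval):
    """The lethal step. With a human in the loop, engagement requires an
    explicit human decision; a fully autonomous system would omit this gate."""
    if human_approval is None:
        raise PermissionError("lethal engagement requires a human decision")
    return {"track": track.track_id, "engaged": human_approval}

tracks = [
    Track("T1", "armor", 0.97),
    Track("T2", "civilian vehicle", 0.99),
    Track("T3", "armor", 0.55),
]

candidates = select_targets(tracks)   # the machine selects: only T1 passes
# A human commander reviews each candidate and declines to engage:
results = [engage(t, human_approval=False) for t in candidates]
```

In this sketch, removing the `human_approval` gate from `engage` is exactly what makes the loop "autonomous" in the directive's sense quoted above: selection and engagement without operator intervention.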

"thomas weapons systems" Discussed on Artificial Intelligence (AI Podcast) with Lex Fridman

Artificial Intelligence (AI Podcast) with Lex Fridman

03:54 min | 1 year ago

"It's all systems go for gain-of-function at this point, which I find very troubling. Now, I'm a little bit of an outsider to this field, but it has echoes of the same kind of problem I see in the AI world with autonomous weapons systems. Nobody — my colleagues, my colleagues' friends, as far as I can tell, people in the community — is really talking about autonomous weapons systems, even as the US and China go full steam ahead on development. And that seems to be a similar kind of thing with gain-of-function. I have friends in the biology space, and they don't want to talk about gain-of-function publicly. That makes me very uncomfortable from an outsider's perspective on gain-of-function, and it makes me very uncomfortable from the insider's perspective on autonomous weapons systems. I'm not sure how to communicate exactly about autonomous weapons systems, and I certainly don't know how to communicate effectively about gain-of-function. What is the right path forward here? Should we cease all gain-of-function research? Is that really the solution here?

"Well, again, I'm going to use gain-of-function in the relatively narrow context. Yes, you could say almost anything that you do to make biology more effective is gain of function, but within the narrow confines of what we're discussing, I think it would be easy enough for level-headed people in all of the countries — level-headed, high governmental people in all the countries that realistically could support such a program — to agree that we don't want this to happen, because all labs leak. An example that I use — I actually didn't use it in the conversation with Sam Harris — is the anthrax attacks in the United States in 2001. Talk about an example of the least likely lab leaking into the least likely place. It was shortly after 9/11. Folks don't remember it, but it was a very, very lethal strain of anthrax that, as it turned out — based on forensic genomic work that was done, and so forth — absolutely leaked from a high-security US Army lab, probably the one in Fort Detrick, Maryland. It might have been another one, but who cares: it absolutely leaked from a high-security US Army lab. And where did it leak to, this highly dangerous substance that was kept under lock and key by a very security-minded organization? Well, it leaked to places including the Senate Majority Leader's office — Tom Daschle, I believe, was Senate Majority Leader then — and certain publications, including, bizarrely, the National Enquirer. But let's go to the Senate Majority Leader's office. It is hard to imagine a more security-minded country than the United States two weeks after the 9/11 attack — I mean, it doesn't get more security-minded than that — and it's also hard to imagine a more security-capable organization than the United States military. We can joke all we want about inefficiencies in the military, and twenty-four-thousand-dollar wrenches and so forth, but it's pretty capable when it comes to that. Despite that level of focus and concern and competence, just days after the 9/11 attacks, something came from the inside of our military-industrial complex and ended up in the office of someone who, as Senate Majority Leader, I believe is somewhere in the line of presidential succession. That tells us everything can leak. So again, think of a level-headed conversation between powerful leaders in a diversity of countries, thinking this through. I can imagine a very simple PowerPoint prevailing — just discussing briefly things like the anthrax leak, things like the foot-and-mouth disease outbreak that leaked out of a Level 4 lab in the UK, several other things — talking about the other violence that could result from gain of function..."

"thomas weapons systems" Discussed on Human Factor Security

Human Factor Security

05:58 min | 3 years ago

"Uh-huh. Hi, everyone. Welcome to this latest episode of the Human Factor podcast. I am completely blown away by the guest on the show this time. I usually ask guests to introduce themselves, but this time I think I really have to introduce our guest formally — and I have to say your second name right. Today's guest is Dr Lydia Kostopoulos — is that right?

"That's right — Kostopoulos."

So, Lydia — and forgive me if I mangle this. Lydia consults on the intersection of people, strategy, technology, education and national security. She addressed United Nations member states at the Convention on Certain Conventional Weapons Group of Governmental Experts meeting on lethal autonomous weapons systems. She was formerly Director of Strategic Engagement at the College of Information and Cyberspace at the National Defense University. She is a principal consultant and PI in higher education, and a professor teaching national security to university students and professional experts on three continents, in several countries and multicultural environments. She speaks and writes on disruptive technology convergence, innovation, tech ethics and national security. She lectures at the National Defense University and the Joint Special Operations University, is a member of the IEEE-USA policy committee, and participates in NATO's Science for Peace and Security Programme. During the Obama administration she received the US Presidential Volunteer Service Award for pro bono work in the security industry and in AI ethics. She has a hashtagged reflection art series, she made a game about emerging technology ethics called Sapien 2.0, and she even has her own clothing line. Welcome to the show — we are now here with Dr Lydia Kostopoulos.

"Thank you."

I mean, gosh, I shortened that bio. It's such an honour for you to come on the podcast, and thank you so much for your time. I'm sure all our listeners are already waiting to see what we can talk about, because there are just so many areas that you work on. If you had to say it in a short way, what would you say you work on?

"Because there are so many areas, to different people I say different things. So whatever I think will interest an audience more, I pick that part out and leave out the other parts."

So, yeah, let's go back a little bit. How did you first get into this whole area? What was your career path up until you started doing this very high level of national-defense consultancy? How did you get there?

"Well, I think I was very influenced by the happenings of September 11th, and that terrorist attack made me feel compelled to study security studies. That led me on a path into my graduate studies, where I looked at peace, diplomacy and conflict, and after that I did my PhD on security policy, again looking at counterterrorism. And then I noticed that terrorist organizations were also using the internet, and they were also using cyberspace, and so I got into cybersecurity and cyber warfare, looking at those different aspects. One thing led to another, and I stayed in the area of technology and national security, and as new technologies evolved and became part of warfare, I looked at those as well. I continue to find it all very fascinating, in that we're living in tremendously amazing times, because technology is moving faster than we can even wrap our heads around. It's pretty fast these days."

Yeah. I saw a quote of yours — you said that we are not yet equipped to really understand, I think you said, the moral, ethical and emotional burden that the pace of technology is hurling at us. Can you expand on that a little bit and tell us a bit more about what you mean?

"Yes, absolutely. That was why I started the game Sapien 2.0, which can also be found online in a mobile-friendly format — it's sapien2-0.com. The idea was that we have all these emerging technologies, but we don't have any policy or regulation around any of them. I am not interested in saying how they should be regulated, but I am interested in there being open debate and discussion around them, and for that to happen we need awareness. So I created this kind of reflectional question game, looking at technologies that affect humanity from birth all the way to death. When I talk about the emotional burden that we are putting on the generations ahead of us: if we don't sit down and grapple with what it means to DNA-edit our future offspring, or what it means to have whole-brain emulation, where we copy our brains and they live on in digital space after our bodies go — all of these kinds of things change how we know our human body, and they change some elements of inequality as well. There are discussions about how people who are well off will be able to be better off health-wise too, because they're going to be able to DNA-edit their babies to get rid of certain diseases, in a way that less well-off people won't be able to do. And in terms of whole-brain emulation — or, flipped around, mind uploading — there are many people who we probably wouldn't want to

"thomas weapons systems" Discussed on WNYC 93.9 FM

WNYC 93.9 FM

03:29 min | 3 years ago

"So when we think about robots — what exactly is out there right now that's being used by the military?

"Well, there's not a lot being used apart from remote-control devices for doing things like bomb disposal. But on ships, and even on land, there is a set of devices that shoot things down — for instance the Iron Dome in Israel, which shoots down missiles. There are a number of these on all British battleships, and most other battleships — or warships — in the Western world will carry something like the Phalanx. If you're being swarm-attacked by a large number of people, you switch it on, and then it will take care of the enemy itself. It uses sensors to detect the incoming. So they switch it on, and then they switch it off afterwards. That's the kind of technology that's actually being deployed at the moment."

Science fiction is always looking at kind of out-there visions of future soldiers and using robots for them. What kinds of things are being considered that aren't necessarily available now but could come down the line quite soon?

"Well, not available now, but they're certainly being developed rapidly. The idea is that you have weapons that, once they have been launched, have no human control — so-called autonomous weapon systems. My concern, and the concern of the campaign I'm involved with, is with the critical functions of target selection by machine and the application of violent force. So the idea is that the machine will be able to select its own targets: you delegate the decision to kill a human to a machine. Now, that to me is against human dignity. People shouldn't get the idea that this is some sort of big Terminator robot with a machine gun. These are like conventional weapons. The US have the X-47B, for instance, which is a fighter jet that can take off from and land on aircraft carriers. They have an autonomous submarine; they have a large ship that accompanies autonomous submarines. Russia have tanks — they're developing the T-14 Armata tank, really advanced, and they're talking about making it autonomous as quickly as possible. Kalashnikov are working on a gun that will do its own targeting through machine learning. China want to develop a hypersonic air-to-air combat fighter. This is really of great concern, because the US started it — they were thinking of a robot going alongside a soldier that you could control — and now there's an arms race going on, and everybody's talking about swarms of robots flying together, coordinating with each other, communicating with each other and selecting targets."

So what needs to be done about that right now, do you think? Do we need to get the UN to put in a new prohibitive international treaty that prohibits the use of them?

"Or we could have a treaty that would simply create a positive obligation for meaningful human control of weapons. We've been campaigning at the UN for five years — it's called the Campaign to Stop Killer Robots — and there are over seventy NGOs, big ones: Human Rights Watch, the Nobel Women's Initiative and other international groups. We're getting across the idea that weapons need to have a human there to check the legitimacy of all targets for every attack. But the appetite for robots to do our violent bidding extends way beyond companies specializing in weaponry. Even people who are building harmless robots for the entertainment industry are being drawn into the killer robots space."

"thomas weapons systems" Discussed on KFI AM 640

KFI AM 640

12:10 min | 3 years ago

"And welcome back to Coast to Coast, George Noory with you, along with Olaf Groth, who is the co-author, with Mark Nitzberg, of Solomon's Code: Humanity in a World of Thinking Machines. And believe me, it's happening faster and faster and faster. Olaf, what is speeding up and increasing the speed? Why is it going so fast?

"George, it's driven by large companies that have a lot of compute power, right? Lots and lots of computers with very fast processors, and they're rolling these capabilities out to all services. So if you picture yourself using Google, Amazon, Apple, Facebook — all of the stuff that you're doing will be permeated by AI algorithms, and they're driven by these very, very fast computers. So that's one. The other one is that this is already well underway. This is already happening in a lot of corporations to improve efficiency and reduce costs, and much of that hasn't made it to the public's attention because it's not sexy — it's not that interesting — but it's already widely applied. If you just think about your mobile phone: it has a lot of AI in it that, frankly, follows you around every day, every minute of the day. So it's already here."

Now, it also, though, could go awry — I'm thinking of, like, RoboCop, or something going wrong. Is that possible?

"Well, we hear a lot about these Terminator scenarios, right? At the moment, scientists tell us we're still fifty to seventy years away from any such capability of creating a superhuman brain. But you could see autonomous systems going more autonomous than we want them to be — autonomous weapon systems not taking the right decisions, or making more decisions than we want them to make without a human in the loop. You could see unintended consequences: cars making the wrong decisions, taking the wrong turns. All of that could happen. That's not quite the same as a superintelligence, but it's certainly a system gone too far — or it very well could be."

Yet it is making an amazing difference in the way we live, isn't it?

"It is. And, George, I also like to look at this the other way round, which is that we also have a bit of a responsibility to pursue this, because there is so much promise. If you just look at things like cancer care, we can augment our own human fallibility — our own weaknesses that we bring — with artificial intelligence, to make our decisions more robust. We can teach better to kids with different inclinations: if they don't understand a math problem from one angle, AI can help the teacher develop another angle and make some recommendations. All of that stuff is incredibly valuable, and forgoing it would also not be a good thing for humanity. So we have a lot of risk to mitigate, but we also have a responsibility to pursue some things here that are really valuable for us."

Solomon's Code — tell me about the name of the book, the title. How'd you pick it?

"Yeah, Solomon was a king in biblical times who, on the one hand, was known to be a very smart guy, but who also made some very tricky ethical decisions that ended up costing him — or his son, respectively — his state, his country. And we're saying that we're now in possession of pretty smart, intelligent tools, but we've got to make the right decisions with them, or we'll end up in a similar misery. And we have a hand in this thing. So that's where it came from."

Now, with artificial intelligence, where do you think it's going to go in the next twenty years, for example? Can you speculate out that far?

"Well, obviously there's a lot to be said about that, but I think there's a good chance that we will all have a personal assistant that will help us make better decisions and understand our environment better. We talked earlier about getting some tools in our hands that can help us when someone approaches us digitally, and safeguard us. I think that'll be part of personal AI — artificial intelligence that will make us contextually smarter: helping us interpret what's happening around us when people react in a funny way when we meet with them, helping us understand why that might be the case; helping us understand, I don't know, financial markets better, traffic patterns, so we get to work better and more on time. I think there'll be a lot of that happening. And there are pros and cons to that, because these personal AIs will not just understand the environment better — they'll also understand us, in ways that we might not even understand ourselves. And so there's a whole other kettle of fish there that we need to take a look at, because already Amazon and Facebook and others, by way of our data footprint, understand us very well. They have us all labeled and tagged in ways we're not even aware of, and AI will turbocharge that."

What about the possibilities of healthcare and AI? Where does it stand now?

"Well, when you look at research — say, cancer research — AI will help us integrate many different data streams. When you look at how different meds conflict with each other within a treatment plan, AI will be much better able to analyze that and help us avoid a lot of mistakes — malpractice as well. And so that's, of course, huge. Recognizing patterns across populations — what kinds of diseases are springing up where, who's getting them, how they're being transmitted — and helping us get smarter about public health. All of those are really great horizons for healthcare."

There's a downside to some of this, isn't there?

"Oh, absolutely. Obviously, privacy is one of them. Do you want to be in control of your healthcare data? You want to make sure it doesn't get into the hands of people who will abuse it, whether for illicit purposes or just for commercial purposes — like, you know, insurance plans. So you want to have control over that. There's definitely that, and various security concerns — hacking, obviously, cybersecurity — and profiling you, and thereby an avenue for steering your choices. So what if my AI found out that I am, whatever, seventy-five percent likely to incur some kind of cancer, and it says: stay away from that; I can help you with a treatment plan, but you're going to have to change your life drastically. Do I still have the right to say, I don't want that — I want to live my life the way I want to live it, and if it's shorter, that's my choice? And when it's tied to an insurance company, they'll say: buster, I'm sorry, but that means you're not going to be covered. So we've got to pay attention to choices as well."

Olaf, if I may — I get up to ten robocalls a day from God knows who, and when you call them back, the phone won't go through, so you can't get to them. Who are these people? What are they?

"Yeah, these are very, very savvy people, domestically and internationally, who get our phone numbers from somewhere and either scare us into something or grab our attention and hold it. And that's, I think, the way the future is going to work. Luckily, we're now seeing applications offered by AT&T and others that will help block some of those calls. But there's a whole new world of digitally blocking us and congesting us. It's really crazy."

And they may not even have our specific number — they may just be dialing random numbers all the way through.

"Oh yeah, definitely. We've got to pay attention to this, and I think governments around the world are aware of it — and telecom companies certainly, because we're going to get very unhappy. So all of this means we're going to have to up our defenses as well."

Do you ever see military applications for AI? Will we have robotic soldiers one day?

"I think that's unavoidable. I think we're already developing those, because we don't want to put American boys and girls at risk in places, and we think we can get some kind of weapons asymmetry, because we are more technical than some other hostile countries. So, yeah, the incentives go in that direction. I will tell you that a lot of very senior military leaders — like the one who wrote the foreword for our book, admirals, other leaders — are very aware of the pitfalls and are advocating for a human to always stay in the loop. There are some very responsible, bright people at the Pentagon who understand what the risks are. But directionally, we want more autonomous systems — that's where we're headed."

What about China — where are they with artificial intelligence? As a matter of fact, there was some kind of an announcement today that a Chinese smartphone is dangerous and could hack into systems. It's crazy.

"Yeah, the Chinese are coming hard and fast. They're not quite as advanced as we are on the science side — not as many technological breakthroughs; they don't have quite the same universities and science establishment we've got. But they've got massive scale. Just think about 1.4 billion people, all generating data. And they're fast. I mean, we always think of ourselves as Americans being very fast, and we are, compared to the rest of the world — but boy, the Chinese are an order of magnitude faster even than we are. So it's going to be just a matter of time. We have a couple of years left, and we need to prepare for that. That data and that speed will eventually help them a lot."

Where does Russia stand?

"Russia is an interesting one. Frankly, they have a fantastic establishment of mathematicians and engineers and scientists, mostly dedicated to the aerospace and defense industries, and not really a big entrepreneurial culture there. The entrepreneurs who can't find connections to big businesses over there come here, so that's hurting them, and they're just now drafting a strategy. But I think Putin — whatever you might think about him, and I disagree with him on a lot of things — has gotten it right when he says those who control AI will, at the end of the day, exercise a large degree of control over how the world is run. So they're definitely aware, and they're coming. But I would say they're definitely about five, six, seven years behind China."

At least, they say that China has tried to hack into our utility systems — testing shutting them down and things like that. Do you believe that?

"You know, I believe it's likely. I don't have the evidence, but if I were China, I would probably be testing networks internationally as well. I think it is making us painfully aware that we are not quite as secure as we need to be. That said, frankly, shutting us down altogether is not really in anybody's interest, actually, if you think about it. We're all connected — the financial flows and trade flows and everything — so shutting us down would shut down a large part of the infrastructure that even countries like China depend on. So I'm not necessarily worried about it, but I definitely think we need to be vigilant."

That's Olaf Groth — we're talking about his book, Solomon's Code, co-authored with Mark Nitzberg. Tell us a little bit about Mark, if you can.

"Yeah, Mark's a great, great guy — my brother in arms here. He has an AI PhD, educated at Harvard, then took a detour as a serial social entrepreneur, sold a couple of ventures, and is now the executive director of the Center for Human-Compatible AI at UC Berkeley. So we decided to write the book together: he's the scientist and the tech brain, and I'm sort of the strategy and high-level trends-and-futures guy. We decided to team up, and it was a good team, that's for sure."

How many people..

"thomas weapons systems" Discussed on 710 WOR

12:52 min | 3 years ago

"thomas weapons systems" Discussed on Newsradio 970 WFLA

13:27 min | 3 years ago

scientist China Solomon Mark knits Berg Olaf georgia Facebook Amazon George Noory Plano Russia Google cancer AT Harvard Sullivan personal assistant
"thomas weapons systems" Discussed on 710 WOR

710 WOR

13:46 min | 3 years ago

"thomas weapons systems" Discussed on 710 WOR

"And welcome back to coast to coast, George Noory with you along with Olaf growth who is the co author with Mark knits Berg of Solomon's code humanity. In a world of thinking machines and believe me, it's happening faster and faster and faster. What Olaf is speeding in in increasing the speed why is it going so fast? Plano, georgia. A lot of this is driven by large companies that have a lot of compute power, right? Lots and lots of computers with very fast processors, and and they're rolling these capabilities, these I thought he's out to all all services. So if you picture yourself using Google Amazon, Apple Facebook, all of the stuff that you doing. We'll be permeated by by the way, I algorithms and they're driven by these very very fast computers. And so that's one the other one is that, you know, this is already well underway. This is already happening in a lot of corporations to improve efficiency and reduce costs right and much of that hasn't really made it to the public's attention because it's not sexy. It's not, you know, it's not that interesting, but it's already widely applied. You know, if you if you just use if you just think about your mobile phone, you know, that has a lot of AI in it that frankly follows you around. Every day every minute of the day. So it's already here. Now in also though could could go awry, I'm thinking of like RoboCop or something goes wrong is that possible. Well, you know, we hear a lot about these Terminator scenarios, right? I mean at the moment, we're still. Yeah. Scientists tell us fifty to seventy years away from any such capabilities creating a super human brain. But you know, you could see autonomous systems going more autonomous than we want them to be right. So Thomas weapon systems, you know, not taking the right decisions or making more decisions than we want them to make without a human in the loop. You know, you could see unintended consequences. 
Car is making the wrong decisions taking the wrong terms turns, you know, all of that could happen, and it's not quite the same as a super super intelligence. But you know, it certainly an autonomous system gone too far. I'll let very well. Could be yet. It is making an amazing difference. In the way, we live, isn't it? It is and you know, Georgia. I I also like to look at this the other way around which is, you know, we also have a bit of a responsibility right to pursue this because there is so much promise. I mean, if you just look at things like cancer care, you know, we can augment our own human fallibility and and our own weaknesses that we bring to analyses. Artificial intelligence to make our decisions more robust. We can teach better to kids different inclinations. Right. They don't understand the math problem for one angle. Well, they can help the teacher, you know, develop another angle make some recommendations in all of that stuff is incredibly valuable and foregoing. That would also not be a good thing for humanity. Right. So we have we have a lot of risks that we mitigate, but we also have responsibility to pursue some things here that are really valuable for us. Solomon's coach tell me about the name of the book the title. How'd you picket? Yeah. You know Sullivan was at times who was on one hand known to be, you know, very smart guy. But who also made some very tricky decisions ethical decisions that ended up costing him or son, respectively, essentially, his state his country. And we're saying, you know, we're we're now in possession of pretty smart intelligent tools, but we gotta make the right decisions with them or will end up in in a similar in a similar similar, misery. Right. And we we have to be on this thing. So that's where that name came from. Now with artificial intelligence, where do you think it's going to go in the next twenty years, for example? Can you speculate out that far? 
Well, you know, obviously there is there is a lot to be said about that. But but I think you know, there's a good. There's a good chance that we will all have a personal assistant that will help us. Make better decisions understand our environment better. Right. We talked earlier about getting getting some tools and hands. They can help us that dentist by who was approaching us digitally, right? And and safeguard us. And so I think that'll be part of of a personally, I, you know, artificial intelligence that will make us contextual smarter. So, you know, helping us interpret what's happening around us when people react to be funny when we meet with them, right? And helping us understand why that might be the case helping us understand. I dunno financial markets better traffic patterns, so we get to work better and more more on time. You know, I think they'll be a lot of a lot of that happening. And you know, there's some there's there's pros and cons to that. Because he's personally I systems will not just. Understand the environment better. But they also understand us in ways that you know, we might not even understand. And so there is a whole nother kettle of fish there that we need to take a look at right because already Amazon and Facebook and others, by way of data footprint understand us. Very, well, they have us metal labeled and tagged in ways that we're not even aware of and the will turbocharged that what about the possibilities of healthcare in a I what does it. Where's it stand now? Well, so you know, when you look at when you look at research wins this cancer research, you know, I will help us integrate many different data streams when you look at how different meds conflict with each other where within a treatment plan. I will be much better able to analyze that. And help us avoid a lot of mistakes a lot of malpractice as well. And so that's of course, huge. Recognizing patterns across populations. 
What kinds of diseases are springing up where, you know, who's getting them, how they're transmitting them — and helping us get smarter about public health. All of those are really great horizons for healthcare. Yet there's a downside; there's some of that there. Oh, absolutely. You know, so obviously privacy is one, right? You want to be in control of your healthcare data, sure, and you want to make sure that this doesn't get into the hands of people who will abuse it — whether it's for, you know, illicit purposes, or whether it's just for commercial purposes, like, you know, insurance plans, right? So you want to have control over that. So there's definitely that. And various security concerns, you know — hacking, obviously, cybersecurity — and profiling you, and thereby penalizing you for making certain choices. So, you know, what if my AI found out that I am, you know, whatever, seventy-five percent likely to incur some kind of cancer, and it says: stay away from that, I can help you with a treatment plan, but you're going to have to change your life drastically? Do I still have the right to say, I don't want that, I want to live my life the way I want to live it, and if it's shorter, that's my choice — right? What if it's tied to an insurance company that will say, buster, I'm sorry, but that means you're not going to be covered? Right. So we've got to pay attention to choices as well. Olaf, if I may — I get up to ten robocalls a day from God knows who they are, and then when you call them back, the phone won't go through, so you can't get to them. Who are these people? What are they? Yeah, you know, these are very, very savvy people, domestically and internationally, that, you know, get our phone numbers from somewhere and either, you know, scare us into something, or, you know, grab our attention and hold it. And, you know, that's, I think, the way the future is going to work. You know? Luckily, we're now seeing applications. 
They're offered by AT&T and others, and they will help block some of those calls. But there's a whole new world of digitally blocking us and congesting us — it's really, it's crazy. And they may not even have our specific number; they may be just dialing random numbers all the way through. Oh, yeah. Yeah, definitely. I mean, so, you know, we've got to pay attention to this, and I think, you know, governments around the world are aware of this, and telecom companies certainly, right? Because we're going to get very unhappy. So all of this means we're going to have to up our defenses as well. Do you ever see military applications for AI? Will we have robotic soldiers one day? I think that's unavoidable. I think already we're developing those, because, you know, we don't want to put American boys and girls at risk in places. And, you know, we think we can get to some kind of, you know, weapons asymmetry because we are more technically capable than some other hostile countries. And so, yeah, I think the incentives go in that direction. I will tell you that a lot of very senior military leaders — like, you know, the one who wrote the foreword for our book, you know, admirals, the leaders — are very aware of the pitfalls and are advocating for a human to always stay in the loop. So, you know, there are some very responsible, bright people at the Pentagon who understand what the risks are. Directionally, we want more autonomous systems — that's where we're headed. What about China? Where are they with artificial intelligence? As a matter of fact, there was some kind of an announcement today that a Chinese smartphone is dangerous, and it could hack into systems. It's crazy. Yeah, the Chinese are coming hard and fast. They're not quite as advanced as we are on the science side — you know, not as many scientific and technological breakthroughs. 
You know, they don't have quite the same universities and science establishment we've got, but they've got massive scale — just think about one point four billion people all generating data, right? And they're fast. I mean, we always think of ourselves as Americans being very fast, and we are, compared to the rest of the world. But boy, the Chinese are an order of magnitude faster even than we are. And so it's got to be just a matter of time. Now, we have a couple of years left, and we need to prepare for that, but, you know, that data and that speed will eventually help them a lot. Where does Russia stand? You know, Russia's an enigma, right? I mean, frankly, they have a fantastic establishment — mathematicians and engineers and scientists — mostly dedicated to state and defense industries, and not really a big entrepreneurial culture there. You know, the entrepreneurs that can't find connections to big businesses over there come over here, so that's hurting them. And they're just now drafting a strategy. But I think, you know, Putin — whatever you might think about him, and, you know, I disagree with him on a lot of things — has got it right when he says those who control AI will, at the end of the day, exercise a large degree of control over how the world is run. Right. So they're definitely aware, and they're coming. But I think I would say they're definitely, you know, about five, six, seven years behind China, at least. They say that China has been trying to hack into our utility systems, testing them — what about shutting them down and things like that? Do you believe that? You know, I believe that, you know, it's likely. I don't have the evidence. You know, if I were China, I would probably be testing networks internationally as well. But yeah, I think it is making us painfully aware that we are, you know, not quite as secure as we need to be right now. 
You know, frankly, shutting us down altogether is not really in anybody's interest, actually, right? If you think about it, we're so well connected with financial flows and trade flows and everything, so shutting us down would shut down a large part of the infrastructure that even countries like China depend on. So I'm not necessarily worried about that, but I definitely think we need to be vigilant. We're talking with Olaf Groth about his book, Solomon's Code. He co-wrote it with Mark Nitzberg — tell us a little bit about Mark, if you can. Yeah, Mark — Mark's a great, great guy, my brother in arms here. He is an AI PhD educated at Harvard who then took a detour as a serial social entrepreneur, sold a couple of ventures, and is now the executive director of the Center for Human-Compatible AI at UC Berkeley. And, you know, so we decided to write the book together — he's the scientist and the tech brain, and I'm sort of the strategist and high-level trends-and-futures guy — and we decided to team up. Well, it was a good team, that's for sure. How many people are in the field right now who would agree with what you're saying tonight? Oh, that's a great question, George. You know, I would say that the majority of people in the AI field would say that we are..

Should Robots Have License to Kill

60-Second Science

03:17 min | 3 years ago


"This podcast is supported by LinkedIn Learning. We're all at different places in our careers. Some of us are just looking for a job; others are trying to get promoted, manage a team, or do something new. Wherever you're at, LinkedIn Learning has more than thirteen thousand courses taught by industry experts to help you succeed in your own way, anytime, anywhere. It features a vast range of business, tech, and creative skills employers are looking for. Visit LinkedIn Learning dot com slash learn for free to get a month free and to keep learning in all the career moments that matter to you. This is Scientific American's 60-Second Science. I'm Christopher Intagliata. Think killer robots — what comes to mind? Maybe this guy: "I'll be back." We're not talking about Terminator, though. We're talking about much simpler technologies that are, at best, a few years away — in fact, many of which you can see under development today in every theater of war. Toby Walsh, a professor of artificial intelligence at the University of New South Wales in Sydney. He spoke February fourteenth as part of a discussion called Killer Robots: Technological, Legal and Ethical Challenges, at a meeting of the American Association for the Advancement of Science. And so these are systems that are using sensors and software processing on their own to determine what constitutes a target, and then applying lethal force to that without supervision or meaningful human control. Another speaker, Peter Asaro, co-founder of the International Committee for Robot Arms Control, has participated in UN talks on autonomous weapons. So what we've really been lobbying for is not a sort of complete ban on some specific technical capability, but a more general requirement that all weapons systems should have meaningful human control over the targeting and engagement of those weapons systems. 
We're also quite fearful that these could constitute a new kind of weapon of mass destruction, to the extent that a small group of people, or individuals, could launch large numbers of autonomous weapons systems that could have devastating effects on populations. Mary Wareham, coordinator of the Campaign to Stop Killer Robots at Human Rights Watch, cited the results of a new international poll on killer robots. The public is becoming more opposed to the prospect of a weapon system that would remove human control: sixty-one percent of the adults polled, from twenty-six countries, oppose the idea of killer robots — that's up from fifty-six percent two years ago. Opposition was strong for both men and for women, although it was men who were more likely to favor these weapon systems compared to women. And the story hasn't ended yet. We don't know how it will end. We hope that it will end with an international treaty that provides guidance and sets the norm, stigmatizing the removal of humans from the selection and attack process of weapons systems — or, you know, the gates will open and we'll see proliferation, destabilizing incidents, and, from our experts, just a bad situation all around. Thanks for listening, for Scientific American's 60-Second Science. I'm Christopher Intagliata.
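The distinction Asaro draws above — software that selects and engages targets on its own, versus a requirement of "meaningful human control over the targeting and engagement" — can be made concrete with a small sketch. Everything below is illustrative and hypothetical (the `Candidate` type, the function names, the confidence threshold are assumptions for the sake of the example, not any real weapon-control system or API); the point is only the structural difference in where the human decision sits.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Candidate:
    track_id: str
    classifier_score: float  # confidence produced by sensor/software processing

def autonomous_select(candidates: List[Candidate],
                      threshold: float = 0.9) -> List[Candidate]:
    # The fully autonomous model the campaigners want banned:
    # software alone decides what constitutes a target and engages it.
    return [c for c in candidates if c.classifier_score >= threshold]

def human_in_the_loop_select(candidates: List[Candidate],
                             approve: Callable[[Candidate], bool],
                             threshold: float = 0.9) -> List[Candidate]:
    # "Meaningful human control": software may only *nominate* candidates;
    # each engagement requires a separate, explicit human decision.
    nominated = [c for c in candidates if c.classifier_score >= threshold]
    return [c for c in nominated if approve(c)]

if __name__ == "__main__":
    tracks = [Candidate("T1", 0.95), Candidate("T2", 0.97), Candidate("T3", 0.40)]
    # No human anywhere in the chain: both high-confidence tracks get engaged.
    print(len(autonomous_select(tracks)))  # 2
    # A human operator who authorizes only T1: one engagement, by human decision.
    print(len(human_in_the_loop_select(tracks, lambda c: c.track_id == "T1")))  # 1
```

Note that the classifier code is identical in both paths; what the proposed treaty norm changes is not the sensing or scoring, but whether a human judgment is interposed between nomination and engagement.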

"thomas weapons systems" Discussed on Newsradio 970 WFLA

Newsradio 970 WFLA

08:50 min | 3 years ago


"Clock at two minutes to midnight. They say, well, we're going to keep it there because of — well, nuclear weapons, of course, but global warming and also fake news. And I shook my head: are you kidding me? That's the best they can do? Well, of course, Jerry Brown's on the board, so you know we're going to hear about climate change — climate changes everything, you know. It's social engineering and geoengineering that are a danger. But I don't know if it's enough to keep the clock at two minutes to midnight. You can push the clock all the way to midnight for all I care, and that is: autonomous weapons and swarm intelligence — two things that, you know, once again, if you're listening to this program tonight, you're way ahead of the group. You're way ahead of the class. It's like, you know, if they could give a college education in Ground Zero, you're way ahead of the class. I mean, there's been a lot of speculation — we've even speculated about the dangers of AI — but it's primarily focused on how AI will take your job. We've already talked about the robot rebellion, okay? We've talked about how robots are going to replace some of you — maybe even, you know, maybe me. I am a robot. You're listening to Ground Zero. I don't know. You see, when you are a technocrat who has a clock that you want to use for your political footballing, that's great. It's about as much worth as a groundhog — Punxsutawney Phil and Groundhog Day, which, by the way, is in two days. But you always had this discussion amongst tech leaders — those who are, you know, really interested in how to use this tech in government — and journalists talk about artificial intelligence all the time. But here we are, into the very, very deep and dark secret of autonomous weapons systems, and what could transpire if the technology falls into the hands of a rogue state, a terrorist organization, or a group on social media that is being somehow manipulated by swarm intelligence — or what we'd call new intelligence. 
There's a lot of debate going on about the moral and legal implications of autonomous weapons. There should be more debate on whether or not they could use swarming intelligence to go beyond the creative field, 'cause they're saying, oh, it's so great — you put people in groups that have the collective high mind, and the collective high mind does this and this and this and this. But what they don't talk about is how it can enable groups to also plot terrorist attacks. And we have a number of reasons to actually kick back and say, wait a minute, why are we ignoring — why is the Bulletin of the Atomic Scientists ignoring this type of technology? But they're saying that the dangers of the world now are being normalized — they're saying that climate change and nuclear war are normalized, and so that's why they kept the clock at two minutes to midnight. And they were saying also that this is the time where we're probably in the darkest days, somewhat similar to the darkest days of the Cold War. And I wondered about that. I thought about that. I pondered about that. And I thought to myself: well, okay, there are existential threats that they're not talking about — certainly threats that maybe nothing's going to be said about, because the technocrats, just like they did with the bomb — they dropped the bomb on Hiroshima, and they dropped the bomb on Nagasaki, and then created the clock to warn people: oh, we don't want to go there again, do we? And they don't want to remind people there are determining factors here, of whether or not we're going to be seeing autonomous robots — or even people who are interfaced with robots — being the ones to push the button when the signal comes to push the button. Oh, we can rely on robots to think; we won't have to ourselves. 
Another close doomsday scenario like we had back on January twenty-fifth of nineteen ninety-five, where Russia almost fired on the United States because of some test that was going on near the Arctic Circle. So they're saying, well, I think it would be more reliable to have AI run the show. But, you know, we need to up the power. We need to be able to say, okay, we give up, we want 5G. We give up — we want you to fry our brains, because our brains are going to be interfacing with computers anyway, so let's just take it. When you're talking about a doomsday clock, I think you should include a lot of things, and I think when technocrats decide they're going to push the issues of global warming and fake news, they ignore other things: generational cycles that influence human activity. We're in a time of excitability. We're in a time of hysteria. We need to consider that there's a lack of unity amongst the masses. Swarm intelligence in a time where there's a lack of unity — we'll have a lot of people plotting to carry out all kinds of horrible things using swarm intelligence. History has shown that, you know, when we sit here and we don't look at the generational patterns — when we don't see the patterns — we end up either destroying ourselves, or we end up making friendships with groups that become dangerous blowback to us. I mean, we've been exposed to mass riots against globalist tyranny. The media's ignoring it on purpose, because they don't want to give us any ideas about how there's a greater enemy out there besides, you know, ourselves. And so, at a time when the course of civilization seems to be more uncertain than ever, as we're being driven towards pessimism and even despair, a turning point is in the making. A turning point — a hinge point, or whatever you want to call it — needs to be considered. We're going to be going down a road that is either the road less traveled... 
The point of least resistance — or, at least, we could very well be on a road we've never even seen before, never traveled down before. So we see the increase of technology, and how, in recent decades, it's actually shortened our time to be social with one another — even our families. Families even text one another in the same house. And we're all captivated and controlled, and we're having those seeds of doubt, those seeds of being an adversary, all being planted into the algorithms. And the Chinese have taken some of this information and they've twisted it, and they've created a self-destruct algorithm, where they push it into something called swarm intelligence, where the hive mind — or whatever you want to call it — can plot to be the ones to carry out these acts of terror, these acts of murder. And not only that — sometimes they're under the control of the mind control, and they can't do anything, because if they do something that isn't what the computer wants, or what the government wants, or whomever the controller is, then they end up getting taken out by a drone — an autonomous drone that doesn't question orders. Everything done by remote control. Everything done by remote control. And this raises new concerns about who, or what, will be capable of pushing the button that would initiate nuclear war — the kill switch. Either it'll be a human who's interfaced with artificial intelligence, or it'll be artificial intelligence itself. It sounds something like the Skynet that we see in the Terminator films, where one day the robots all rise up and they say, we're going to kill you, we don't want you anymore — they basically say, we need to wipe out the humans because they are a pain in our sides. So I just think that it's a — hello? Yeah, my producer's talking to me. Are you okay? Everything okay? Oh, I was just — I saw something, and I — I'm sorry, Bobo in Los Angeles will wonder why I went quiet. 
I think — I think Sam is calling us, Ron. Ron, are you there? Is Sam calling us? Is that Sam? Okay. Bobo, can we go to break? That's why I stopped — because you don't know who Sam is. Bobo — the guy, the guy that got the info, the info guy.

"thomas weapons systems" Discussed on News Radio 810 WGY

News Radio 810 WGY

04:15 min | 3 years ago


"Triple eight, seven three three seven hundred — seventy-three thirty-seven hundred. So the doomsday clock: two minutes. They say, well, we're going to keep it there because of — well, nuclear weapons, of course, but global warming and also fake news. And I shook my head: are you kidding me? That's the best they can do? Well, of course, Jerry Brown's on the board, so you know we're going to hear about climate change — you hear about climate change everything. You know, it's social engineering and geoengineering that are a danger. But I don't know if it's enough to keep the clock at two minutes to midnight. Put the clock all the way to midnight for all I care, and that is: autonomous weapons and swarm intelligence — two things that, you know, once again, if you're listening to this program tonight, you're way ahead of the group. You're way ahead of the class. It's like, you know, if they could give a college education in Ground Zero, you're way ahead of the class. I mean, there's been a lot of speculation — we've even speculated about the dangers of AI — but, you know, it's primarily focused on how AI will take your job. We've already talked about the robot rebellion, okay? We've talked about how robots are going to replace some of you — maybe even, you know, maybe me. I am a robot. You're listening to Ground Zero. I don't know. You see, when you are a technocrat who has a clock that you want to use for your political footballing, that's great. It's about as much worth as a groundhog — Punxsutawney Phil and Groundhog Day, which, by the way, is in two days. But you always had this discussion amongst tech leaders — those who are, you know, really interested in how to use this tech in government — and journalists talk about artificial intelligence all the time. But here we are, into the very, very deep, dark secret of autonomous weapon systems. 
And what could transpire if the technology falls into the hands of a rogue state, a terrorist organization, or a group on social media that is being somehow manipulated by swarm intelligence — or call it new intelligence. There's a lot of debate going on about the moral and legal implications of autonomous weapons. There should be more debate on whether or not they can use swarming — this new intelligence — to go beyond the creative field, 'cause they're saying, oh, it's so great — you put people in groups that have the collective high mind, and the collective high mind does this and this and this and this. But what they don't talk about is how it can enable groups to also plot terrorist attacks. And then we have a number of reasons to actually kick back and say, wait a minute, why are we ignoring — why is the Bulletin of the Atomic Scientists ignoring this type of technology? But they're saying that the dangers of the world now are being normalized — they're saying that climate change and nuclear war are normalized, and so that's why they kept the clock at two minutes to midnight. And they were saying also this is the time where we're probably in the darkest days, somewhat similar to the dark days of the Cold War. And I wondered about that. I thought about that. I pondered about that. And I thought to myself: well, okay, there are existential threats that they're not talking about — certainly threats that maybe nothing's going to be said about, because the technocrats, just like they did with the bomb — they dropped the bomb on Hiroshima, and they dropped the bomb on Nagasaki, and then they created the clock to warn people: oh, we don't want to go there again, do we? And they don't want to remind people that there are determining factors here, whether or not we're going to be seeing autonomous robots — or even people who are interfaced with robots — being the ones to push the button when the signal comes to push the button. 
Oh, we can rely on robots to think; we won't have to ourselves. Another close doomsday scenario like we did back on January twenty-fifth, nineteen ninety-five, where Russia almost fired on the United States because of some test that was going on near the Arctic Circle. So they're saying, well, I think it would be more reliable to have AI run the show. But, you know, we need to up the power..

"thomas weapons systems" Discussed on Future Tense

Future Tense

03:11 min | 3 years ago


"A conference in the US capital organized by the Washington Post — the focus is on defense and modernization. Just to drill down: when you think about the future, talk a little bit about how you see AI transforming your business of the military. And the guy in the hot seat is General Joseph Dunford, chairman of the US Joint Chiefs of Staff. In our profession, one of the areas that's going to really determine future outcomes is speed of decision-making. So AI is certainly relevant to speed of decision-making. If you think about cyberspace, AI is critical to being able to implement effective ways of protecting ourselves in cyberspace. I don't think it would be an overstatement, when we talk about artificial intelligence, to say that whoever has a competitive advantage in artificial intelligence, and can field systems informed by artificial intelligence, could very well have an overall competitive advantage. I mean, I think it may be that important. I don't think it's something we can say definitively at this point, but it's certainly going to inform — and be the preponderance of — the variables that would go into, hey, who has an overall competitive advantage? AI will be a key piece of it. Antony Funnell here — welcome to Future Tense. A group of the world's leading scientists and tech experts, including physicist Stephen Hawking and Apple co-founder Steve Wozniak, have issued a stark warning. In an open letter published today, they say autonomous weapons systems — which use artificial intelligence to select targets without human intervention — should be banned. That warning was more than three years ago, and since that time the development of autonomous weaponry has continued apace. When you begin to think about what a world would look like where militaries have deployed autonomous weapons in large numbers, one of the dramatic changes that we're likely to see is the pace of battle accelerating. Paul Scharre is our first guest today, as we look at the influence of speed on future conflict. 
He's the director of technology and national security at the Center for a New American Security. And one of the drivers of militaries pursuing this technology is fear that others are doing so, and that they'll have to do this just to keep pace. This was well captured — I'm going to paraphrase here — by a quote from former Deputy Secretary of Defense Bob Work, who said: what if others build Terminators, and they don't make decisions as good as people do, but they're faster — how do we respond? That's a kind of colorful and slightly scary way to look at the problem, which is that even if autonomous systems don't have all of the reasoning capabilities that humans have in a variety of different contexts — maybe they don't understand ethical principles the same way, and these are vitally important things — if they're faster, that pressure alone will drive militaries to use this technology, but could also shift warfare into a new domain.
