Dr. Stephen Larson and Tim Shi join the regular panelists to discuss the race to Artificial General Intelligence (AGI) and its potential implications.
Brought to you by Fling: Urban Drone Delivery. Get it fast. Fling it!
Opening clip from Star Trek: The Next Generation episode “The Measure of a Man”.
Co-host Daniel Valenzuela wrote an article on Medium summarizing this episode: Roadmap to Artificial General Intelligence
Welcome to Let’s Make The Future.
Our topic this week is: Artificial General Intelligence
In this episode, in addition to our regular panelists, we welcome two guests: Stephen Larson and Tim Shi.
Tim Shi is a Stanford University Computer Science student and founder of moxel.ai, a machine learning social aggregation platform. His research interests include NLP and general-purpose reinforcement learning, with a focus on agents that operate directly on websites.
Dr. Stephen Larson is CEO of MetaCell, a bioinformatics software services company. He is a graduate of MIT in Computer Science and received a Ph.D. in Neuroscience from UC San Diego. He is also Co-Founder and Project Director of OpenWorm, whose mission is to simulate the body and neural network of the nematode C. elegans in a computer.
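To illustrate the general idea behind Tim's research area — an agent that learns to operate an environment through trial and error — here is a minimal tabular Q-learning sketch on a toy "web task". Everything in it (the environment, states, rewards, and hyperparameters) is invented purely for illustration and has no connection to moxel.ai's actual system, which works on real web pages:

```python
import random

class ToyWebTask:
    """Toy stand-in for a web-navigation task: states are pages 0..4,
    action 1 follows the correct link, action 0 follows a wrong link
    that sends the agent back to the start page. Reaching page 4 (the
    'task completed' page) yields a reward of 1."""
    N_STATES, N_ACTIONS, GOAL = 5, 2, 4

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state = self.state + 1 if action == 1 else 0
        done = self.state == self.GOAL
        return self.state, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, seed=0):
    """Off-policy Q-learning: behave randomly, learn the greedy values."""
    rng = random.Random(seed)
    env = ToyWebTask()
    q = [[0.0] * env.N_ACTIONS for _ in range(env.N_STATES)]
    for _ in range(episodes):
        s, done, steps = env.reset(), False, 0
        while not done and steps < 50:
            a = rng.randrange(env.N_ACTIONS)       # random behavior policy
            s2, r, done = env.step(a)
            # Standard Q-learning update toward the greedy target.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s, steps = s2, steps + 1
    return q

q = train()
# The learned greedy policy should pick the "correct link" (action 1)
# on every page before the goal.
policy = [max(range(2), key=lambda a: q[s][a]) for s in range(4)]
print(policy)
```

The toy chain makes the learned values easy to check by hand (the optimal value of each state is a power of the discount factor), but the agent loop itself — observe, act, receive reward, update — is the same shape a web agent would use.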
Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and future studies.
In my opinion, the greatest feat humanity has yet achieved is landing humans on the moon, in 1969. It was impressive on both a technical level and as a symbol of humanity stepping beyond the cradle. However, less than 100 years later, it seems likely that humans will best both the science and symbolism of this feat, as we achieve creation itself, and create a mind in our own image.
- Tim, please tell us about your project.
- Stephen, please tell us about how OpenWorm might contribute to humanity’s achievement of Artificial General Intelligence.
- Is there any role for biology anymore?
- The collection of vast amounts of data is vital to progress toward specific AI goals (for instance, self-driving cars, image classification). Does this mean the algorithms are not important?
- Has Google built up an insurmountable advantage (personnel, data, technology) in the race to AGI?
- Is the cutting-edge research towards the goal of AGI taking place in universities or in extremely well-funded company research departments?
- What is the difference between Machine Learning and Artificial Intelligence?
- (Broader) Will the development of AGI cause power to be more concentrated or decentralized in human society?
  - Intelligent agents could protect people
  - AGI could create soldiers and track citizens on a massive scale
- It is a recurring theme of fiction, including Blade Runner, whose sequel I saw last night, that humans, in programming their best qualities into their machines, end up in a world where the machines are more human than their masters.
Discussion notes:
- It is hard to understand the brain; arguably impossible to reverse engineer
- Optimizing neural networks
- Stephen: a classic dichotomy; the proof is in the pudding
- The movement of researchers from academia into industry
- A moral obligation to give agency to AIs
- Learning from weak and noisy signals
- Embodied artificial intelligence: interaction with an environment is critical
- How much work has been done to improve? New activities that haven’t been done before?
- Creativity divorced from agency; what does intelligence divorced from agency require?
- A trend towards more organization, not less, with disruptive periods of more decentralization
- Political and economic dimensions: who benefits from AGI?
- Economics: do only elites get personal AIs, or does everyone?
- Politics: how much autonomy should AIs have?
- The control problem with top AIs
- Human history: the arrival of Homo sapiens displaced the Neanderthals
Episode Machine Transcript
A single Data, and forgive me, Commander, is a curiosity. A wonder, even. But thousands of Datas… isn’t that becoming a race? And won’t we be judged by how we treat that race?
Understand what is the archery if you are you sure you meant to go for it right here for centuries so I want to meet the budget conscious person even the smallest degree what is the matter I don’t know fear.
Well that’s the question.
Welcome to Let’s Make the Future, a discussion about future trends, technologies, and their implications for human society. We are coming to you from all over the world, featuring the voices of Daniel Valenzuela and Michael Carey. Music and editing by Christian Pelton. In this episode, our discussion topic: artificial general intelligence, with Dr. Stephen Larson and Tim Shi.
Brought to you by Fling: urban drone delivery. Get it fast. Fling it!
Welcome to Let’s Make the Future. I’m Michael Carey. Our topic this week is artificial general intelligence, so perhaps first our regular panelists could introduce themselves.
[Inaudible.] OK. And in addition to our regular panelists, we welcome two guests: Stephen Larson and Tim Shi. Tim Shi is a Stanford University computer science student and founder of moxel.ai, a machine learning social aggregation platform, while Dr. Stephen Larson is CEO of MetaCell, a bioinformatics software services company. He’s a graduate of MIT in computer science and received a Ph.D. in neuroscience from UC San Diego. He’s also co-founder and project director of a project very close to my heart, OpenWorm, whose mission is to simulate the body and neural network of the nematode C. elegans in a computer. So perhaps, Tim, you could favor us with a bit more detail: what is your background as it relates to artificial intelligence? We don’t hear you, Tim; I think you’re muted right now, still connecting audio. No problem. This is always a pain with these kinds of things, getting all the details right with the audio connections. Maybe we should have a podcast about that: before we can develop artificial intelligence, maybe we need to actually be able to talk to one another.
If we don’t even get simple things to work that have been on the market for years, what does that say about whether we’ll at some point have good artificial general intelligence? You see it already in self-driving tests: when they’re driving on the streets, I was thinking about fatal accidents caused by autonomous driving, which would have much more impact on regulations, on the companies, and on the development of autonomous vehicles. So maybe that’s just something everybody deals with, as we are dealing with it right now. Yeah, maybe just like how any idiot can have a child, maybe humans can make an artificial intelligence despite our own idiocy, and we’ll just leave the AI to solve all our problems: let the AI go to difficult school, do right by its parents, and give us the wonderful retirement that we all deserve.
I think we should wait for Tim here; maybe we can start having a conversation, and once Tim connects we’ll ask about his background, because I’d first actually like to ask Stephen a question about OpenWorm. But first maybe I should do the introduction to our topic. I wrote up a little thing here, partly cribbed from Wikipedia: artificial general intelligence is the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and future studies. And I think that’s Tim. Hello? Oh, yes, we can hear you. I just had to reboot. Yeah, that tends to fix things.
So actually, maybe we can just jump back for a sec, because, Tim, I gave a short introduction about you as the founder of moxel.ai, but perhaps you could give a slightly more elaborate introduction of yourself and how you came into the field of AI. Yeah, sure. So currently I’m a student at Stanford, and I’ve been doing NLP research, and in the past year I’ve been working on a project trying to build a general-purpose reinforcement learning agent for the web. The idea is that you can go to any website and the agent operates on the website, so it can do simple tasks like looking up flights, using a currency exchange, or searching for a restaurant you want to go to. Gradually we want to automate everyday errands on the Internet. And recently, a new kind of project.
[Inaudible.]
[Inaudible] so you can tell them to play the piano, you can let them [inaudible], you know, cook, and that for me would be the future. OK. And also, given the fact that a lot of people are working on AI right now, [inaudible].
With different people doing different [projects], I’m kind of thinking that in the future there might be a need for some infrastructure, some form of open AGI data and machine [repository] that people contribute to collaboratively, where you can build on that machine and see the whole of it. So there’s some hope of collaboration. I don’t want to hype it, but right now [inaudible] it’s still difficult for people to tap into.
But the question is: what kind of infrastructure do you think it would need? I think Tim would definitely have a great response to this one; it may be a great chance for you to talk about moxel. Yeah, I think traditionally, because private companies own the data, a lot of the best models are owned by private companies: if you look at speech recognition or object detection, Google, Microsoft, or other big companies have the best APIs. But I think if we want AI to benefit the public, and not just be controlled by a few private companies, we really need a sort of equivalent of the open-source movement in machine learning: we need to open up the data acquired to train these general-purpose intelligences. That’s probably one of the reasons why I started moxel: to build a community where researchers and developers can share data and models trained from those data, so that we can democratize access to AI and people can freely use those applications and enjoy the high-accuracy benefit of those models.
That’s fantastic. So the final question takes us in a broader direction. We’ve been talking about very specific problems in getting to AGI, and then the specific issues relating to the development of AGI. Now, what about the implications of this technology for human society in general? I wonder, if we do achieve this goal, let’s say in our lifetimes, will the development of AGI cause power to be more concentrated or more decentralized in human society? We can imagine that AGI
could lead to intelligent agents that act as a kind of personal Jarvis, a personal assistant that could help defend you against the onslaught of spammers and all kinds of other attempts to defraud you, and help guide you through a more complicated world. On the other hand, AGI could also help governments to track citizens on a massive scale. It’s not quite AGI that would be doing that, but the general techniques of artificial intelligence would help them do more and more advanced pattern recognition for the purposes of tracking people. It could also lead to the creation of super soldiers that have agency and the very same reasoning abilities as human soldiers. So we can see this going in two different directions: towards power becoming more concentrated, or towards power becoming less concentrated. What kind of world, what kind of human society, do you envision as a consequence of this development?
I’m a bit biased, and I think that if you look at the history of the evolution of complex adaptive systems, you often see a trend towards more organization rather than less. However, that path isn’t a straight line, so you may go through disruptive periods where there is more decentralization. I think we see in society a tension between these two things. The introduction of a disruptive new innovation or new technology like AGI is, I think, a chaotic event for human history; it’s one that causes realignments, and it’s not clear how to project which direction it will take. I would say, though, without giving a straight answer, that we all have to contemplate this along a few other dimensions than we commonly think about, and the two that come to mind are political and economic. On the economic side, I think we have to think about who benefits from AGI
and what value and need is created by it. Does every individual need a personal AI to do something for them that they’re willing to pay money for, or is it more that elites will have AIs provide the services? That’s the economics side, I guess. On the political side, I’d ask: how much autonomy do we think AGIs are going to have, and how much autonomy do we think we want to allow them? Science fiction has many different versions of this, ranging from complete freedom, essentially another species living alongside us, to others with more of a slave-race, underclass kind of perspective, and then others where they are limited from having any self-control or free will and are just basically directly serving us. So it’s unclear which versions of those things we’re going to have, and it’s possible that, as AGI plays out, we will go through eras where all of those things occur, and we go back and forth. So I think when one imagines the impact of AGI
one may have to think about the fact that it’s not going to be a single end state: it will be a continually evolving technology that plays out over quite a lot of time as it interacts with the interests of human beings, as well as the interests of elites within human society.
Yeah, that’s a great point. On the question of whether human society is going to be organized in a more centralized or decentralized way, I’m less worried about how human society will be organized and more about the relationship between machines and humans, because if you imagine a scenario where you truly have intelligent machines, then having control of those machines is a very crucial point. And I worry, because we don’t have any research or approach that can currently guarantee that we keep control of those systems. Of course we can’t predict the future, but if you look at human history, with the arrival of Homo sapiens, something changed in our genes that made us different from the Neanderthals, and that change made our population grow much faster than the other species on the planet, and eventually humans took over the planet. If we have intelligent machines that are more intelligent than humans, will we keep those machines as servants or pets, or will it be the other way around? That’s the point that I’m not sure about.
Yeah, I think, Tim, you did a great job of addressing the scenario that Stephen didn’t mention, which is the scenario where AIs are the top dog: not that the AIs are slaves, and not that AIs are free citizens like we tend to imagine with Data, but instead where they’re the overlords, and how do we control them? Certainly that’s another scenario for sure. Now, which one will come to pass? I guess, Tim, you’re going with human history here; you’re thinking that’s going to be the case, I’m hearing. And Stephen is a true pundit, hedging his bets there. [Inaudible.]
OK, fair enough. I do think this is a fascinating question as far as what happens here. I suppose in the best-case scenario the AIs do end up with supremacy, but they leave us to our own devices, and then we can decide how centralized or decentralized we want our society to be after they’ve handled all of our scarcity issues for us.
Well, one thing, so as not to hedge too much: I would weigh in on the side that we don’t have to worry about overlords taking over for the time being, just because one has to imagine that AI has to survive in the same natural world that human beings and all of the animal kingdom live in. Right now the instantiation of AI is still in machines, and those machines have physical limitations: they wear out, they need electricity, they don’t reproduce themselves, and these are things that nobody is yet proposing that we build into AI. So at least for the time being, in the foreseeable future, unless we make AI reproduce itself, unless we build computers that have infinite amounts of energy and never require more energy put in, unless we teach AI to gather its own energy from the environment, and unless we teach it to repair itself and rebuild itself, I think that we will still have a fair fight. I think a lot of the science fiction cases where we imagine that AIs have taken over from human beings tend to either conveniently leave that out or don’t really explain how the machines are able to solve these basic problems of resource gathering. Even The Matrix, I think, had a pretty poor physics explanation, in that it sort of uses human beings as an energy source, which I don’t think is
fully practical, and there is no deeper explanation. So until we see that coming, I think that we’re still safe. But by safe, I mean, you just talked about damage done intentionally, for example by overlords, but what do you think about damage being done unintentionally? I think that might be a very natural thing in very many scenarios; what’s your opinion on that? When you say breakouts, what do you mean by breakouts? I mean, for example, AI being programmed to pursue a certain goal, and in order to achieve that goal it starts doing unpredicted behavior that might be damaging in whatever sense, to better achieve that goal. Well, I guess what I’m saying here is that the mechanisms through which AI currently can affect the world are mechanisms that humans built. So while you’re right that a Skynet scenario, where AI launches all the nuclear missiles that we have built, is probably something that we would want to worry about, on the other hand, the fact that we are the ones who have built all the things that AI is going to manipulate gives us, I think, a fighting chance. We can choose to not give full control of our nuclear arsenal to AI, or we can choose to not build our F-16s so that they can be flown and have missiles launched by AI systems. We can build in failsafes that only humans can unlock, and we can build back doors into things, that kind of thing. So until the machines start building themselves, I guess what I mean is that, yes, there’s a chance for bad stuff to happen, but I see it as bounded by, essentially, human agency and human, you know, stupidity, if you will, for the time being.
Right. I think the lesson of Stephen, as well as Blade Runner 2049, which I just saw last night, is: for the love of God, don’t let the replicants reproduce. That’s the key, right? So I’ll just give you both the last word, and then we’ll finish the conversation, in case you had something final to say.
Well, I would say that we should definitely be optimistic about the research in general, because it’s really beneficial to society overall. But if you look at the chance of an AI extinction scenario, even if there is just a 0.001 percent probability, we shouldn’t forget about that small probability, and we should prepare for the worst.
Agreed. All right, so just before we end the conversation, I did want to give both Stephen and Tim a chance to tell our listeners how they can find out more about them. So, Stephen, maybe you could give a link to your work, sir. You can find out more about the project to build a virtual organism in a computer, with a simulated nervous system, at openworm.org, and you can find out more about my software company at metacell.us. Fantastic. And Tim, how can our listeners find out more about you? You can find out more about my research at [inaudible], and also you can look at the cool demos of models: go to moxel, and I would love to hear your feedback on the project. I love both of your creative uses of the new top-level domains that have become available over the last few years. So anyway, thanks so much, guys. This was a fantastic conversation; I wish we could go on forever, but I really had a fantastic time. So thank you to Stephen Larson, thank you, Tim Shi, and thank you to our regular panelists Daniel and Michael as well. We’ll see you next time. Thank you so much. It was a lot of fun. Thanks.
Let’s Make the Future. Find us online at [inaudible].