LMTF Podcast Episode 15: Artificial General Intelligence


Dr. Stephen Larson and Tim Shi join the regular panelists to discuss the race to Artificial General Intelligence (AGI) and its potential implications.

Brought to you by Fling: Urban Drone Delivery. Get it fast. Fling it!

Opening clip from Star Trek: The Next Generation episode “A Measure of a Man”.

Medium Article

Co-host Daniel Valenzuela wrote an article on Medium summarizing this episode: Roadmap to Artificial General Intelligence

In Spanish: El camino hacia la Inteligencia Artificial General


Welcome to Let’s Make The Future.

Our topic this week is: Artificial General Intelligence

In this episode, in addition to our regular panelists, we welcome two guests: Stephen Larson and Tim Shi.

Tim Shi is a Stanford University Computer Science student and founder of moxel.ai, a machine learning social aggregation platform.  His research spans NLP and general-purpose reinforcement learning, including agents that operate directly on websites.

Links: TimShi.xyz and moxel.ai
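
Tim's "general-purpose reinforcement learning" can be illustrated with the most stripped-down version of the idea: an agent that learns which actions to take purely from trial-and-error reward. This is a hypothetical toy sketch, not Tim's actual web agent; the five-state chain environment and all constants are invented for illustration.

```python
import random

# Toy tabular Q-learning sketch (illustrative only, not moxel.ai's code).
# The agent learns, from reward alone, to walk right along a 5-state chain.
N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # step left or step right

def step(state, action):
    """Deterministic environment: reward 1 only on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit the table, sometimes explore
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            nxt, r, done = step(s, a)
            best_next = max(q[(nxt, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
    return q

q = train()
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)]
```

After training, the greedy policy steps right (+1) from every non-terminal state. The same update rule, with a neural network standing in for the table, underlies the deep reinforcement learning systems discussed later in the episode.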

Dr. Stephen Larson is CEO of MetaCell, a bioinformatics software services company.  He is a graduate of MIT in Computer Science and received a Ph.D. in Neuroscience from UC San Diego.  He is also Co-Founder and Project Director of OpenWorm, whose mission is to simulate the body and neural network of the nematode C. elegans in a computer.

Links: OpenWorm.org and MetaCell.us
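
To give a flavor of what "simulate … in a computer" means at the smallest scale, here is a leaky integrate-and-fire model, one of the simplest spiking-neuron abstractions. OpenWorm's actual models are far more detailed (ion-channel kinetics, 3D body physics); the constants below are arbitrary, chosen only so the neuron fires.

```python
# Minimal leaky integrate-and-fire neuron (illustrative constants only;
# OpenWorm's real C. elegans models are vastly more detailed).
def simulate_lif(i_input=1.5, v_rest=0.0, v_thresh=1.0, tau=10.0,
                 dt=0.1, t_max=100.0):
    """Euler-integrate dV/dt = (-(V - v_rest) + i_input) / tau,
    emitting a spike and resetting whenever V crosses threshold."""
    v, t, spikes = v_rest, 0.0, []
    while t < t_max:
        v += dt * (-(v - v_rest) + i_input) / tau
        if v >= v_thresh:      # threshold crossed: spike, then reset
            spikes.append(round(t, 1))
            v = v_rest
        t += dt
    return spikes

spike_times = simulate_lif()   # regular spiking under constant drive
```

With a constant suprathreshold input the model fires at a regular rate; 302 of these (plus synapses, gap junctions, and a body) is still a long way from a worm, which is exactly the gap the discussion below explores.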


Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and future studies.


In my opinion, the greatest feat humanity has yet achieved is landing humans on the moon in 1969.  It was impressive both as a technical accomplishment and as a symbol of humanity stepping beyond the cradle.  Less than 100 years later, however, it seems likely that humans will best both the science and the symbolism of that feat, as we approach creation itself and make a mind in our own image.

My questions:

  • Tim, please tell us about your project.
  • Stephen, please tell us about how OpenWorm might contribute to humanity’s achievement of Artificial General Intelligence.
  • Is there any role for biology anymore?
  • The collection of vast amounts of data is vital to progress toward specific AI goals (for instance, self-driving cars, image classification). Does this mean the algorithms are not important?
  • Has Google built up an insurmountable advantage (personnel, data, technology) in the race to AGI?
  • Is the cutting-edge research towards the goal of AGI taking place in universities or in extremely well-funded company research departments?
  • What is the difference between Machine Learning and Artificial Intelligence?
  • (Broader) Will the development of AGI cause power to be more concentrated or decentralized in human society?


Intelligent agents can protect people

AGI could create soldiers, track citizens on a massive scale

  • It is a recurring theme of fiction, including Blade Runner, whose sequel I saw last night, that humans, in programming their best qualities into their machines, end up in a world where the machines are more human than their masters.
    • Morality



hard to understand the brain

impossible to reverse engineer

optimize neural networks

Stephen – a classic dichotomy between science and engineering

proof is in the pudding


revolving door

academia into industry



moral obligation to give agency to AIs


supervised learning

learn from weak and noisy signals

embodied artificial intelligence

interaction with an environment is critical



how much of current work improves on existing capabilities vs. enables new activities that haven’t been done before?


creativity divorced from agency


intelligence divorced from agency requires


trend towards more organization, not less

disruptive periods where there is more decentralization

chaotic event

political / economic dimensions

who benefits from AGI?


economics: do only elites get personal AIs, or does everyone?

political: how much autonomy should AIs have?

control problem with top AIs

human history: the arrival of Homo sapiens displaced the Neanderthals

Episode Machine Transcript (unedited and uncorrected)

The future.

Forgive me Commander is a curiosity.

I wonder even but I was.


A race.

We be judged by how we treat that race. Come on that want to date.

Understand what is the archery if you are you sure you meant to go for it right here for centuries so I want to meet the budget conscious person even the smallest degree what is the matter I don’t know fear.

Do you.

Do you.

Well that’s the question.

Welcome to Let’s Make The Future, a discussion about future trends, technologies, and their implications for human society. We are coming to you from all over the world, featuring the voices of Daniel Valenzuela, Michael,

and Michael Carey. Music and editing: Christian Pelton. In this episode, a future trend discussion topic: artificial general intelligence, with Dr. Stephen Larson and Tim Shi.

Brought to you by Fling.

Get it fast.

Welcome to Let’s Make The Future. Michael Carey here. Our topic this week is artificial general intelligence, so perhaps first our regular panelists could introduce themselves.


OK. And in addition to our regular panelists, we welcome two guests: Stephen Larson and Tim Shi. Tim Shi is a Stanford University computer science student and founder of moxel.ai, a machine learning social aggregation platform, while Dr. Stephen Larson is CEO of MetaCell, a bioinformatics software services company. He’s a graduate of MIT in computer science and received a Ph.D. in neuroscience from UC San Diego. He’s also co-founder and project director of a project very close to my heart, OpenWorm, whose mission is to simulate the body and neural network of the nematode C. elegans in a computer. So perhaps, Tim, you could favor us with a bit more detail: what is your background as it relates to artificial intelligence? We don’t hear you, Tim; I think you’re muted right now, connecting audio. OK, no problem, Tim is still connecting. No problem; I know this is always a pain with these kinds of things, to get all the details right with the audio connections. Maybe we should have a podcast about that: before we can develop artificial intelligence, maybe we need to actually be able to talk to one another. Hopefully, if we don’t even get simple things like this to work, things that have been on the market for years, what does that say about whether we can at some point build good artificial general intelligence? I feel like that’s also slightly interesting when you see autonomous vehicles tested driving on the streets, and I was thinking about fatal accidents caused by an autonomous-driving vehicle, which would have much more impact on regulations and on the companies and the development of autonomous vehicles. So maybe that’s just something everybody deals with, as we are dealing with it right now. Yeah, maybe just like how any idiot can have a child, maybe humans can.
Make an artificial intelligence despite our own idiocy and we’ll just leave the AI to solve all our problems you know let the AI go to difficult school and you know do right by its parents and give us the wonderful retirement that we all deserve I think we should wait for Tim here maybe we can have a we can start having a conversation though and him can connect and then we’ll connect about his background here because I first actually like to ask Stephen a question about open worm but first maybe I should do the introduction to our topic so I wrote up a little thing here partly Cripps from Wikipedia artificial general intelligence is the intelligence of a machine that could successfully perform any intellectual task that a human being can it is a primary goal of some artificial intelligence research and it’s a common topic in science fiction and future studies and I think that’s Tim hello oh oh yes we can hear you who hates it just because I had to pick awesome Yeah that tends to fix things just reboot Yeah yeah so actually maybe we can just jump back for a sec because Tim I gave a short introduction about you as the founder of Marshall the AI but perhaps you could give a slightly more elaborate introduction of yourself and how you came into the field of AI Yeah sure so apparently I’m a student at Stanford and I thing durians research and yeah I laughed at first yeah and more during like an alkie research the past year or been working a project trying to feel like a general purpose reversal on an agent for the lab so the idea is like you can go to any website and use that I think we’re covering for a small army and tried to put it in tech It operates on the website so you can do Center tasks like looking at flights or using currency change or are just searching for a restaurant they want to go to so gradually want to automate efforts to greet you on the Internet and recently I mean what kind of project.

So I would go is to basically in the place to people upload their mission. Intelligence and it will be a place where people can share emotions colleges and three years of people as models have is that we should leverage that and meant national out in North America that’s a fascinating idea to make the underlying data that makes the machine learning algorithms possible or the insights that they generate possible and to make that data more available and I think we’ll be talking more about that later but thank you for that more detailed introduction That’s great yes so let me just give the general intro here and then we can get into some more specific questions including that detail about Stephen’s project so as I was saying artificial general intelligence is the intelligence of a machine that could successfully perform any intellectual task that a human being can but as I say it’s the primary goal of some artificial intelligence research it’s a common topic in science fiction and in my opinion the greatest feed humanity has yet achieved is landing humans on the moon in one thousand nine hundred sixty nine which was impressive on a technical level and is a symbol of humanity stepping beyond the cradle but less than one hundred years later it seems likely that humans will best both the science and symbolism of this feat as we might achieve creation itself and create a mind in our own image so I feel like what Tim was just describing in his work is a series of steps perhaps towards that goal that I’m interested in talking about today at a general level and Stephen you’re also working in a field that connects to this research in some way and can also speak to it but I’d be interested to hear about how open worm one project that you work on might contribute to humanity’s achievement of artificial general intelligence Thanks Michael Well the only instance of general intelligence that we have to look at is the one that’s produced by the human brain and trying to understand 
the human brain deeply has been a grand effort in the history of biology and neuroscience for one hundred years, if you count modern neuroscience, and a lot longer if you count the amount of time that human beings have been wondering how their minds work. When you dive into trying to understand the human brain, you realize how far we still have to go to understand it and how vast it is in terms of trying to work out individual mechanisms. So several years back, with a bunch of colleagues, I started a project called OpenWorm, which took a very humble step towards understanding a tiny number of neurons in a whole organism. We picked probably the best-studied organism known to man, which happens to be a microscopic worm that only has three hundred and two neurons in it, and realizing that neuroscience still didn’t have a deep understanding of how these three hundred and two neurons even produced simple swimming and crawling behaviors, we set about building an open science project to try and unpack this, with the idea that there’s no way to figure out the systems in the human brain until we can understand some very simple network. So although right now we are mainly focused on the biology of this much more humble organism, the idea is that principles that we uncover in this network could be applied down the road to understanding the human brain, and then ultimately what are the mechanisms that the human brain uses to produce intelligence. All right, that makes a lot of sense. I’m just struggling now to connect everything that we’re talking about together, because it’s fascinating to see the different approaches that are being taken to take us closer to this goal. Whereas Tim has been following a program of research that involves techniques that are specifically optimized to solve very specific problems, Stephen is taking a wholly different approach, going from first principles with biology, looking at a specific organism, and seeing whether that worm
Might have some insights to give us on this past words creating human level artificial intelligence so completely radically different approaches and yet again motivated by that same goal so I wonder if either of our guests has an opinion about which approach we think will ultimately yield the most fruit is it good to have both is there a chance that one will become basically a dead end because we right now I suppose don’t know exactly what approach will lead to the promise land and the opinions they’re allowed to go first Oh I think there is an argument that like if you want to go and aircraft you have to study the birds Well you’ll have to like to go out aerodynamics and I think the same mind apply fair for out of share intelligence because I think the brain is such a very emergency that requires a lot of knowledge in your dynamics and it will be very hard to understand the brain every detail and to be able to reconstruct it from scratch maybe we might be able to figure out like the fundamental principle of that lead to the emergence that requires like a lot of that’s impossible to reverse engineer and only through like Alice experimentation of different presuppose that we try run a simulation and see if the principle can lead to an emergent intelligence so I’m more in favor of the approach of taking an algorithm and run a simulation or. Kind of what we do with neural networks try to optimize it for a particular go that’s my point of view here but I think that for our artificial intelligence right now people do look at is for issues from biology and especially the newer network LAN people have looked at a lot of inspiration is more on the high level principles rather than how the Matheson’s actually work and that is person who has pushed this forward last rites of the very nature of a neural network an artificial neural network. 
Was first inspired by biology, but is there any role for biology anymore? Are there any frogs in the rainforest left for the pharmaceutical companies to find, proverbially speaking, as far as artificial intelligence research goes? Or is biology not necessary anymore, and the field can push forward without really looking at the actual specifics of how ion channels work and how the neural network actually operates in a real human? STEPHEN: Yeah, the two areas are proceeding, I think, in a classic dichotomy between science and engineering. Artificial intelligence proceeds mainly by people innovating and building on foundations of the past, so it’s very much building on top of previous work, whereas on the biology side and the neuroscience side we are in a realm of discovery; we are letting nature teach us lessons through direct investigation of the brain. So I think the proof is in the pudding as to which approach will lead us to the quote-unquote promised land of general artificial intelligence. I think that it’s hard to say before you get there which approach is going to be successful, so I think that it’s best, if you’re a betting man or woman, to diversify your portfolio, not put all your eggs in one basket, and see how both turn out. I think no one’s making an argument that we’ve learned everything that we need to learn about the brain and that it’s just a solved problem, and at the same time I think no one is ready to say that AI has reached its absolute peak and there will never be better AI. So I think both are worthy endeavors to proceed in. Right, I would like to take the conversation in a slightly different direction. I did want to ask something more specific to the current program of research taking place in large companies, specifically Google, being such a leader in the field of AI right now, and I’m sure basically everyone in this room is following news stories and
All in the progress that’s been made in self driving cars in image classification and the kind of applications that we’re seeing come out of companies like Google and I’m wondering if a company like Google has built up an insurmountable advantage at this point in terms of not just data and the vast amounts of data that has been able to collect but also personnel as opposed to the kind of personnel that might be operating you know academia and if this advantage is at this point insurmountable in the race to A.G.I. and if the real cutting edge of research is now taking place behind the walled gardens of private companies research departments rather than in the open space of universities I suppose that’s a little unfair I should say to private research given that they also publish papers but what is the opinion of the room here do we think that private companies really have all the cards at this point well with the recent events on the robotic side where you’ve seen big Silicon Valley companies hiring up old departments from universities I think that you see what’s actually in place is kind of a revolving door it may look one sided as a fight is only from academia into industry but as soon as industry starts to invest in one area there are pioneers that are still reaching out into new spaces in new areas and down the road they will be either the professors the future or they will start their own companies that can then either be purchased or take on their own advantage so I think it’s tied to again the idea that we don’t have a final answer for A.G.I. 
yet and I don’t think there’s broad consensus as to what is to it at the moment if other people feel like there is a definite path right now I’d love to hear it but given that I think people are still searching around for it there’s still an opportunity for those in academia to make a really big contribution going forward it just seems like the funding difference is just so dramatic though isn’t it like with academics you have to apply for funding and apply for grants in your. Always concerned about ten year and publishing whereas if you’re in a company’s research department I mean there’s none of those pressures there’s some pressure to I guess basic competence but I mean that’s a good thing I suppose whereas again you just have this all these distractions in academia and he’s just so much better funded I just can’t imagine that in the long term the university will be the place where the cutting edge research will take place I mean I guess I could be wrong but it just seems like it seems natural that it will take place in the research departments Well keep in mind that there are an incentive structures that govern both institutions So while you may be right that absolute dollars are being funneled on the industry side to a greater extent V. 
motivations that people have on the industry side can be limiting in a different way, which is that even research and development groups within large companies like Google still have to justify their existence, and they still will get cut if they don’t meet whatever level of performance they’re intended to meet. Whereas an academic suffers under grants and all those sorts of pressures, but also, if they publish in the field and make contributions that their colleagues find actually interesting, they can continue a research program for quite a while. So I think academia is the place where much longer bets are happening, potentially ten-, twenty-, thirty-year bets that individual researchers are making that may not pay off for a good long while. You don’t see Google necessarily making thirty-year-long plays. So I think it’s just different, and it’s, I think, hard to count academia out yet until you see some of those things. On the biology side you’ve seen this just happen with CRISPR, for example: there are plenty of pharmaceutical companies that could have potentially discovered this gene-editing mechanism, but it was actually the work of academics for a good three decades that brought it forward. So I don’t see why, in principle, that can’t happen in the space of AI. Well, that’s fascinating, and I wonder, Tim, do you have any perspective on this? I mean, you’re in academia right now; are you bristling at my comments about the irrelevance of academia? Well, I think science evolves in waves.
There’s like paradigm shift and there’s like continuous improvement Currently we are in a paradigm of using deep learning to you know like our show intelligence and we think that paradigm and I think private companies are very good at doing that because they have lots of compute and lots of data on the other hand if you were expecting for a new paradigm at some point and I think that academia is the right place for that to it for just like advocation was or an active Yeah twenty years ago so if you really are in a race towards A.G.I. I don’t think that we are in a right paradigm for in terms of algorithm yet we are expecting a new kind of approach yet to be invented maybe place fired that won’t lead to like more intelligent machines that’s a fascinating and tantalizing remark to him because I really wanted to ask you specifically about your work with machine learning and what you think the limits are of that technique because it’s been pushed so far perhaps the most dramatic use of it outside of perhaps driving cars was last year with Alpha go defeating one of the leading human players in go police at all and it was fascinating because you know all throughout my university days in the early two thousand as I was told by my computer science professors of course that go is this insurmountable challenge to computers be impossible given the sheer number of possible games and so no algorithm with the pruning strategy would work but now we’re seeing that basically any problem that can be well defined and that could be solved by a human with its neural network it seems to be something that could be solved if you throw enough data at it or if you make the computer play itself in any given well defined game you can basically get it to play as well as a human can and in many cases much much better so I’m wondering if this technique is a sufficient paradigm to get us to A.G.I. I guess you’re saying no but yeah that’s really my question is What are the limits to our current. 
Basket of techniques so I think machine learning is a very general term and within that general scope currently people have being intensively using supervised learning to train like MOTOS to form narrow scope tasks so these narrow AI’s have been wildly successful because you could just throw lots of data at those models and they will be able to learn from data and achieve a very good accuracy or task if you’re really on a raise towards A.G.I. what we really need is an intelligence that not only for narrow scope tasks of be able to channel realised in your chest and kind of like a human do that still requires machine to learn just like a human there but we humans are able to learn from there a few examples so we are it seems more or and we’re out of trajectory of continuous learning we go to power previous experiences and adapt our capability to new domain very quickly so I think Mississippi’s incur machine learning is capability to transfer learning for a while be able to transfer from one domain that the machine already knows to new domain that it hasn’t and all of the work they’re currently doing they are just defining a problem and trying to get the machine to learn to solve that problem that’s very different from the snare of ocular learned during like a continuous lifelong learning experience so really just like having a machine to learn continuously every day and building up our experiences to saw new tasks and eventually tour is like a general purpose intelligence so the way that a human learns the way that I learn you know I encounter information and I slowly get better add coming up with solutions to a given problem and it seems like machine learning is doing a very similar thing it’s following a very similar process so I still don’t quite see how that couldn’t be just further and further generalized For example if you were to supply some of these algorithms with life log or data from people you know recorded from babies or something from where the day 
they were born you know you’re giving them the same kind of. Information that a human would receive and you’re giving it because it’s unstructured input in the sense that they’re not able to understand strongly think of like exactly how you provide the input to this desist and but the radically you could give it the very same input it’s just a data issue give it the same the very same input that a human gets and so is it a data issue or is it a fundamental problem with our algorithms that it’s just not going to generalize in the way that humans brain with all of its very structures is able to do well in your example all humans are exposed to like a stream of videos but the back for learning is ferret wheats it is from start to think of the people they interact with or is it based on like star no sensory information but overall we don’t have like a clear objective that we’re trying to optimize just like what we currently what most supervised learning techniques do and I think it would to have learned from a very weak and noisy sick nose for you to figure out what that signal is is a very challenging mission when you see even you think there’s some role for biology here to improve that and what is it that AI researchers right now are missing that biology might be able to add to the picture well not just biology but in the ninety’s I came across the idea of and bought into artificial intelligence where work with robotics began to cross over with a I am the idea was sort of Spawn that interaction with an environment was actually critical to being able to understand how to solve problems beginning to solve problems in a more general way and I moved forward quite a lot when you began to look at the problem not just one of calling in data and building models but also interacting with the world so biology is natural and bodied whether it’s a worm or a human we all sort of grow up in an environment that we must interact with in tight loops so I think that perhaps after we 
get out of the deep learning paradigm which isn’t so much about embodied and we go back to that if we may recognize that it may be hard for us to come up with more general intelligence if you aren’t using an agent that has the ability to affect. The world as well as collect data on the world that’s a great point I see maybe Michael has a question or comment My question is in you know we human we find it easy to use our imagination you know even poll if we’re trying to do when you attack the government comical or living you cause that we pretty much use my business in a lot and that way we do not need a lot of data any idea what question is how much of what we’d being done right now to kind of we prove my thing to do it would be that I would be double or to grief something new for example if you look at invention and easily lead replicate them that way at him and can see different something completely in me from that yeah that’s a great question does anyone want to jump in on that though I think imagination is one of the fascinating areas that I haven’t seen as much work going into you seeing a little bit of it with against on the projects of Google reading and a lot of images on the Internet and then using neural networks to produce images that don’t actually exist that are kind of cross-overs eye level crossovers they sort of produce a dream like images but I still think that’s a far cry from true imagination I think imagination is one of those topics that is mysterious because it’s one of the higher order functions that the brain produces it’s exactly one of the reasons why I think that further research into the brain is necessary to understand those mechanisms in a deep level I think we don’t need data to do that because we spend our entire lives building up the data for us to do imagination and Wes and I think that our brains are naturally given to imagination as you can see in children so that’s not all the time kind of in a fantasy you know imagination space 
so it’s definitely a core feature of human intelligence that I think is not still pretty not well covered in AI and a lot more working go into to say one thing about imagination two which is that when folks try to build this into a system that you programs I think imagination often can only operate within the boundaries that you defined a system. The How so if you define an AI that works in chess or that works in go or that works in a blocks world you could call thinking ahead in the game as an imagination but again you’ll only be limited to thinking about chess boards or thinking about go boards or thinking about a blocks world I think what mysterious and interesting about the human brain is that it seems to have this unbounded quality where it could think about all of those things or a whole other day mains that you can just kind of make up as it goes along so that’s one boat in the camp or continuing to learn lessons from the real grammars too many thoughts on that creativity I think currently all the might surprise there are new or a really first milling techniques they mostly are trying to frame intelligence in the frame of solving optimization problem and we haven’t really like tried to go into the creative take part especially because we don’t have a very good understanding of what creativity is and that makes research really hard because you don’t have to clear definition of the problem but I do think like yes as steep as mentioned some of that work at Google is fairy interesting they could generate a dream like images or magenta project they’re trying to like generate music but another point I want to make is pretty sometimes happens in a human lentil A happens when you keep practicing some instruments or as some skill like a long period of time and then you suddenly become a master in that scale and that’s not very different from like say we had a mission to huge trainee set and make it keep learning to do that thing until I could become pitched out 
Britain to do that so exactly as I’ve seen people try to train like it generates of model textbooks or in novels and model is able to generate sort of similar style text that minute that writing style of the book so in the S.S. machinist be able to do like small and modify creativity from like that kind of style but we’re still like a very long way to words like very fundamental shifts in like critic City where it’s a humanist to OK I thought it was funny that if you think. If it was not clear to you why AI terminology is called The way it is after this episode it will be stuff like training as Tim explained there in detail now but also basically of a deep dream like that became clear to me right now thinking about imagination where you tried to what you learned from one data said to imagine the same thing in another context I don’t know I never really got the naming really there but I wanted to ask as I feel like this discussion like Problem solution based research on AI and biologically academic research so we didn’t really talk about the combination of the two I think that we probably something really interesting for example when we were explaining Steve about the open war in Project commute really I mean that thinking about general intelligence or about the brain are right biologically there are multiple parts of the brain and they are all more specified to do something particularly or better than other parts of the brain is that something you have been researching that how to maybe like combine marrow artificial intelligence using biology into one more general artificial intelligence that can then see some of tenuously use different AI’s to make a more broader A I is that something you think that the two appeals of research might converge Yeah I think that’s pretty insightful obviously at this point in terms of useful applications it seems like the engineering based approach is taking leaps and bounds ahead in that you know voice recognition in the last ten 
years has gotten dramatically better, just as one example. And I do think that these are all tools that feed back into the study of the brain. In OpenWorm, for example, we are using machine learning to help us better understand, and fill in gaps in, how biological neural networks work. In that case you have a lot of missing data — even though we are using just about the best-studied organism, there’s still missing data — and so AI is actually helping us understand those biological networks. So in some sense the task of unpacking how the human brain works may be so complex that we can’t help but bring our AI friends and technologies to bear on answering that question. And yes, that will then feed back into, hopefully, a virtuous cycle, where what we unlock lets us build better technology, which then helps us figure out more of the things we haven’t.

I did want to say, you know, when it comes to general intelligence or creativity or imagination, or even how humans learn to do things, one thing for sure can be said — and of course this to some extent touches on your beliefs about spirituality, but let me put it this way: nothing in neuroscience has ever shown any evidence that’s inconsistent with the idea that everything that comprises your mental life is the product of the activity of the cells that are in your head. So all those things — creativity, imagination, learning — are happening as a result of neurons taking different states of activity in your head right now. So I think what’s tantalizing for those of us on the biology side is to say, well, we can debate the definitions of creativity or whatnot, or we can just go about trying to find out — look inside the brain to see what’s happening when you are creative, or when you are imagining — and that kind of settles the debate for all time. Whatever we make up from the outside may be very clever, and may come pretty close, but until you see the ground truth and you know the true
way that it’s actually happening inside the head, we probably won’t know for sure how to define it in the best way.

That also makes me think about how, in the future, we might want to give rights to the AIs that we develop. Let’s say Tim, in ten years, develops an AI that he claims is a general intelligence and deserves all the rights and privileges of humans. I think it’ll take some of Stephen’s work to demonstrate that the mental states inside the algorithms — inside the software that Tim has written — actually correspond, in a sufficiently isomorphic way, to the states that exist in humans and that are deserving of moral consideration. So you may have to work together at some point. These are two examples.

Stephen, one question for clarification on your comment about how imagination — how everything, really — is basically a product of computation in your head: that also implies that everything can be learned. Imagination is also just a product of data, isn’t it?

Can you say more — when you say imagination is a product of data, what do you mean?

So, when I think about imagination, it would probably be something where you have a lot of data, but you don’t use it in the usual way, to optimize something and get very specific results; you use the data in a more anomalous way to maybe make something further. I’m really trying not to use the word creativity right now, because that loops back to earlier — I guess what I’m referring to is just a twist, in my head, on what we already asked about creativity. Basically, I was wondering how much of that can be deduced, or done, just by using data.

Well, I guess we understand data in a way that is very much tied to our current architectures for computing. So when you say data, I think a lot about what you might store on a hard drive; the way the brain represents data may be quite different in some ways, although we can make these analogies. But I think
one of the reasons why things like creativity and imagination are hard for us right now is exactly because of the way that the brain may store information. For example, when you have a dream at night — for most of us, these dreams are things we haven’t seen before, though some of us have recurring dreams — the leading ideas in neuroscience would say that your dream is taking things that you experienced in the past, maybe things you experienced that day, and mixing them all up. In some ways we think that’s consolidating some of those memories so that you’ll have them for the future, but it’s also a kind of replay of what happened during the day. And some people believe that imagination and creativity work in very much the same way: a replay, or a remix, of experiences that you’ve had in the past. So when it comes to us trying to simulate imagination or creativity, it’s very important for us to understand how the brain goes about storing the experiences we’ve had in the past, in order to then take the next step and ask how it goes about remixing — how it goes about recombining what we have learned, in new ways. You might look at creativity as anticipation of things that might happen in the future: exploring possible futures, taking the knowledge that we have now and projecting it forward to imagine all the multiple ways that things might play out down the road. So you could look at that as making different possible futures based on current episodes. So whether or not the way that we think about data right now is sufficient for imagination, I think the jury is still out. I think we have to explore and understand, deeply, the way that data gets processed by neural systems before we can fully understand what will be required for imagination.

Great — yeah, go ahead, Michael, just jump right in.
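Tim’s example above — train a generative model on a book and it produces text in the author’s style — can be shown in miniature with a toy character-level Markov model. This is an illustrative sketch of my own, far simpler than the neural models being discussed, but it makes the "remix of past experience" idea concrete: every character it emits is a continuation it actually observed in its training text.

```python
import random
from collections import defaultdict

def build_model(text, order=3):
    """Map every `order`-character context in `text` to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=80, rng=None):
    """Grow text one character at a time, always picking a continuation seen in training."""
    rng = rng or random.Random(0)
    out = seed
    while len(out) < len(seed) + length:
        followers = model.get(out[-len(seed):])
        if not followers:          # this context never occurred in training: stop
            break
        out += rng.choice(followers)
    return out

corpus = ("It was the best of times, it was the worst of times, "
          "it was the age of wisdom, it was the age of foolishness.")
model = build_model(corpus, order=3)
print(generate(model, "it "))
```

Because the model can only recombine fragments of its corpus, its output reads like a shuffled echo of the original — which is exactly the "replay and remix" picture Stephen describes for dreams, scaled down to three characters of memory.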

[inaudible]

[partly inaudible] — so you could tell it to play the piano, or teach it any lengthy skill it doesn’t already know, and that for me would be the future. And also, given the fact that a lot of people are working on AI in different ways right now —

— with different people doing different things in AI, I’m kind of thinking that in the future there might be a need for some infrastructure: some form of open gateway for AGI data and machines, where people can contribute data and models and tap into what’s already there, so there is some hope of collaboration. I don’t want to hype it, but right now, whatever the state of AI is, I think it’s still difficult for people to tap into. [partly inaudible]

So the question is: what kind of infrastructure do you think we would need to create to enable that?

I think Tim would definitely have a great response to this one — maybe a great chance for you to talk about Moxel.

Yeah. I think traditionally, because private companies own the data, a lot of the best models are also owned by private companies: if you look at speech recognition or object detection, Google or Microsoft or the other big companies have the best APIs. But I think if we want AI to benefit the public, and not just be controlled by a few private companies, we really need a sort of equivalent of the open-source movement in machine learning — we need to open up the data that’s been acquired to train these general-purpose AIs, these general-purpose intelligences. That’s probably one of the reasons why I started Moxel: to help build a community of researchers and developers who share data, and models trained from those data, so that we can democratize access to AI and people can freely use those applications and enjoy the high-accuracy benefit of those models.

That’s fantastic. So the final question takes us in a broader direction. We’ve been talking about very specific problems in getting to AGI, and then specific issues relating to the development of AGI. Now, what about the implications of this technology for human society in general? I wonder, if we do achieve this goal — let’s say in our lifetimes — will the development of AGI cause power to become more concentrated, or more decentralized, in human society? We can imagine AGI
could lead to intelligent agents that act as a kind of personal Jarvis — a personal assistant that could help defend you against the onslaught of spammers and all the other attempts to defraud you, and help guide you through a more complicated world. On the other hand, AGI could also help governments track citizens on a massive scale — it’s not quite AGI that would be doing that, but the general techniques of artificial intelligence would help them do more and more advanced pattern recognition for the purposes of tracking people. It could also lead to the creation of super-soldiers that have agency and the very same reasoning abilities as human soldiers. So we can see this going in two different directions: toward power becoming more concentrated, and toward power becoming less concentrated. What kind of world, what kind of human society, do you envision as a consequence of the development of AGI?

I’m a bit biased, but I think that if you look at the history of evolution and of complex adaptive systems, you often see a trend toward more organization rather than less. However, that path isn’t a straight line, so you may go through disruptive periods where there is more decentralization. I think what we see in society is a tension between these two things. The introduction of a disruptive new innovation or technology like AGI is, I think, a chaotic event for human history, one that causes realignments, and it’s not clear how to project which direction it will take. I would say, though — without giving a straight answer — that we all have to contemplate this along a few more dimensions than we commonly think about, and the two that come to mind are political and economic. On the economic side, I think we have to think about who benefits from AGI
and what is the value, and the need, that is created by it. Does every individual need a personal AI to do something for them that they’re willing to pay money for, or is it more that elites would prefer to have an AI provide the service? That’s the economics side, at least. On the political side, I’d ask: how much autonomy do we think AGIs are going to have, and how much autonomy do we think we want to allow them? Science fiction has many different versions of this, ranging from completely free-willed AIs — essentially another species living alongside us — to others with more of a slave-race, underclass kind of perspective, and still others where the AIs are limited from having any self-control or free will and are just directly serving us. So it’s unclear which versions of those things we’re going to have, and it’s possible that as AGI plays out we will go through eras where all of those things occur, and we go back and forth. So I think when one imagines the impact of AGI
one may have to recognize that it’s not going to be a single end state: it will be a continually evolving technology that plays out over quite a lot of time, as it interacts with the interests of human beings, as well as the interests of elites within human society.

Yeah, that’s a great point. On the question of whether human society is going to be organized in a more centralized or decentralized way — I’m less worried about how human society will be organized, and more about the relationship between machines and humans. Because imagine a scenario where you truly have intelligent machines: then having control of those machines becomes the crucial point, and I worry because we don’t have any research or approach that can currently guarantee that we keep control of those AIs. Of course I can’t predict the future, but look at human history: with the arrival of Homo sapiens, something changed in our genes that made us different from the other primates, and that change made our population grow much faster than the other species on the planet, and eventually humans took over the planet. If we have intelligent machines that are more intelligent than humans, will we keep those machines as some kind of servants, or pets — or will it be the other way around? That’s the point I’m not sure about.

Yeah — Tim, I think you did a great job of addressing the scenario that Stephen didn’t mention, which is the scenario where the AIs are the top dog: not that the AIs are slaves, and not that the AIs are free citizens like we tend to see with Data, but instead where they’re the overlords, and how do we control them. Certainly that’s another scenario for sure. Now, which one will come to pass? Tim, going with human history here, you’re thinking that’s going to be the case, I’m hearing — and Stephen is a true pundit, hedging his bets there rather than concluding.
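Tim’s earlier proposal — an open, community-owned exchange for models and training data — can be sketched at its simplest as a registry mapping published models to the tasks they solve. This is a hypothetical toy of my own, not Moxel’s actual API; every name and URL below is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ModelEntry:
    name: str            # unique identifier chosen by the publisher (illustrative)
    task: str            # e.g. "speech-recognition"
    weights_url: str     # where the trained parameters can be downloaded

class ModelRegistry:
    """A toy in-memory registry: publish once, then anyone can look models up by task."""
    def __init__(self):
        self._entries = {}

    def publish(self, entry):
        if entry.name in self._entries:
            raise ValueError(f"model {entry.name!r} is already published")
        self._entries[entry.name] = entry

    def search(self, task):
        return [e for e in self._entries.values() if e.task == task]

registry = ModelRegistry()
registry.publish(ModelEntry("asr-baseline", "speech-recognition",
                            "https://example.org/asr.weights"))
print([e.name for e in registry.search("speech-recognition")])  # prints ['asr-baseline']
```

The design choice worth noticing is that discovery is keyed by task, not by owner: that is what would let a community, rather than a single company, decide which speech-recognition model everyone benefits from.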

OK, fair enough. I do think this is a fascinating question. As far as what happens here, I suppose in the best-case scenario the AIs do end up with supremacy, but they leave us to our own devices, and then we can decide how centralized or decentralized we want our society to be after they’ve handled all of our scarcity issues for us.

I’ll say one more thing, I guess, if not to hedge too much: I would weigh in on the side that we don’t have to worry about overlords taking over for the time being, just because one has to imagine that AI has to survive in the same natural world that human beings and all of the animal kingdom do. Right now the instantiation of AI is still in machines, and those machines have physical limitations: they wear out, they need electricity, they don’t reproduce themselves — and these are things that nobody is yet proposing we build into AI. So at least for the time being, for the foreseeable future — unless we make AI reproduce itself, unless we build computers that have infinite amounts of energy and never require more to be put in, unless we teach AI to gather its own energy from the environment, and unless we teach it to repair and rebuild itself — I think we still have a fair fight. A lot of the science-fiction cases where we imagine that AIs have taken over from human beings tend to either conveniently leave that out or never really explain how the machines are able to solve these basic problems of resource gathering. Even The Matrix, I think, had a pretty poor physics explanation — it uses human beings as an energy source, which I don’t think is
fully practical, or much of a deep explanation. So until we see that coming, I think we’re still safe.

But by "safe" — you just talked about damage done intentionally, for example by overlords, but what do you think about damage being done unintentionally? I think that might be a very natural thing in many scenarios. What’s your opinion on that?

When you say breakouts, what do you mean by breakouts?

I mean, for example, an AI being programmed to pursue a certain goal, and in order to achieve that goal it starts exhibiting unpredicted behavior that might be damaging in some sense, all to better achieve that goal.

Well, I guess what I’m saying here is that the mechanisms through which AI can currently affect the world are mechanisms that humans built. So while you’re right that a Skynet scenario — where an AI launches all the nuclear missiles that we have built — is probably something we would want to worry about, on the other hand, the fact that we are the ones who built all the things the AI is going to manipulate gives us, I think, a fighting chance. We can choose not to give full control of our nuclear arsenal to an AI, or we can choose not to build our F-
16s so that they can be flown, and have their missiles launched, by AI systems. We can build in fail-safes that only humans can unlock, and we can build back doors into things — that kind of thing. So until the machines start building themselves, what I mean is that, yes, there’s a chance for bad stuff to happen, but I see it as bounded, essentially, by human agency — and human stupidity, if you will — for the time being.

Right. I think the lesson of Stephen, as well as of Blade Runner 2049, which I just saw last night, is: for the love of God, don’t let the replicants reproduce. That’s the key, right? So, Tim, I’ll just give you the last word, and then we’ll finish the conversation, in case you had something final to say.

Well, I would say that we should definitely be optimistic about the research in general, because it’s really beneficial to society overall. But if we look at the chances — even if there is just, say, a 0.001 percent probability of a disaster scenario, we shouldn’t forget about that small probability, and we should prepare for the worst.

Agreed. All right, so just before we end the conversation, I did want to give both Stephen and Tim a chance to tell our listeners how they can find out more about them. So, Stephen, maybe you could give a link to your work, sir.

You can find out more about the project to build a virtual organism — a simulated body and nervous system — in a computer at OpenWorm.org, and you can find out more about my software company at MetaCell.us.

Fantastic. And Tim, how can our listeners find out more about you?

You can find out more about my research at TimShi.xyz, and you can also look at the cool demos of models uploaded to Moxel — just go to moxel.ai. I would love to hear your feedback on the project.

I love both of your creative uses of the new top-level domains that have become available over the last few years. So anyway, yeah, thanks so
much, guys. This was a fantastic conversation — I wish we could go on forever, but I really had a fantastic time. So thank you to Stephen Larson, thank you to Tim Shi, and thank you to our regular panelists, Daniel and Michael, as well. We’ll see you next time.

Thank you so much — bye, everyone. It was a lot of fun. Thanks.


Let’s Make The Future — [outro, partly inaudible].







