Three friends discuss the prospect of lifelong artificial intelligence companions. This idea was described in detail by Jonathan Mugan in the article You and Your Bot: A New Kind of Lifelong Relationship.
Recorded 14 May 2017.
Episode Machine Transcript (unedited and uncorrected)
Michael, you start? OK guys, I'll start with the intro. OK.
Welcome to Let's Make the Future, a show where three friends appraise trends in technology and their implications for human society. We're coming to you from all over the world. Let's introduce ourselves. I'm Hadi, a techno piano player from Iran currently living in Michigan, working on biomedical devices and future cyber technology. I'm Michael Carey, an independent software developer and entrepreneur from Canada, currently living in parts unknown. I'm Daniel [unintelligible], currently based in Munich. Here's the format of our show: first we talk about the latest future-related news, then we discuss a particular future-oriented topic for about thirty minutes, and for the final ten minutes we reserve the elevator battle. Let's get started. So this week we'll jump right into the topic, because there's no major news, right? Yeah. All right, Dan. So the topic is an article that talks about how we could use an AI as a personal assistant in a much more personalized way: not just a system suggesting the next word you want to type, but an AI that comes into your life when you're basically a baby, accompanies you throughout your life, and comes to know an enormous amount about you, so it would be able to teach you concepts through examples and words you already know and understand best. So we can talk about the technological and social implications of that. So Daniel, what you're saying is that today's topic is you and your bot: a new kind of lifelong relationship, one that we've never experienced before and that we're about to experience. You and Your Bot: A New Kind of Lifelong Relationship is a blog entry by Jonathan Mugan, who is an AI researcher.
And he basically broke down what relationship we might soon be experiencing with our personal digital assistants. The idea is that such an assistant could be much, much more than just something that buys things for us, like Amazon Echo does right now, or something that answers Wikipedia questions like Alexa does on your phone, or Siri giving you directions. Instead, this would be a bot that collects all of that massive amount of data you're emitting at all times, the data your phone is collecting and giving to Facebook and other corporations right now, except this bot collects it and uses it to be your guide, your advocate, throughout your life, potentially radically altering individual people's lives. And you can just imagine it; the mind starts to race with possibilities. If you had a super strong, infinitely patient AI sitting right beside you, watching everything you're watching and experiencing everything you're experiencing from cradle to grave, just imagine what kind of insights and what kind of applications would come out of that. Yeah, I think one of the biggest advantages of having an AI bot throughout your life, from childhood, is the tremendous amount of integrity it can introduce to people's lives, because one of the challenging factors in growing up is maintaining integrity in our personality. I myself get moody and depressed a lot of the time; sometimes I forget the accomplishments I've had, I feel bad about myself, and I make bad decisions because of that momentary mood. I can't really understand and see the patterns throughout my life, the things I've done and the challenges I've been through, to extract those lessons. A bot, though, can always keep a record of the patterns in your life, see the bigger picture, and always be alert, because that robot is not going to
be depressed, or get forgetful, or do the other things that are, you know, human. So I feel that from that perspective it would be tremendous progress toward integrity for a person. I like that point, but I think that right now we already have many solutions in terms of personal assistants, and I feel at many points the article doesn't really say much new; it just further extends what we already have today into one combined solution covering everything. Because if you think about it, for example, take your keyboard: I've had mine for, I don't know, three years, and it really knows what I want to say and probably knows what words I use. So we've always had assistants that accompany us for some time and learn about us during that time. The idea would be to extend that even more, starting in childhood. But that would create an extremely large dependency, and it would also make it much harder to have a good market, because you would commit to probably one of these service providers, and then how would you deal with that? I don't know; I'm suddenly feeling uneasy about it, to be honest. In that sense, I'm not sure why we have to assume there would be barriers to migrating from one personal assistant to another. I imagine that if you've been with one for ten years, it might be an emotional experience to change to another, or maybe it would be a smooth transition, because you would be able to copy over the voice and many of the personality traits, and you would just be swapping it out for a faster model. Or maybe it's just like right now, when I want to switch my document editing software from Microsoft Word to Google Docs: Google Docs tries hard to make migration easy, because it reads the old documents and has the same keyboard shortcuts, that sort of thing. So I imagine
migration is maybe not as much of an issue. But I do think that the thought that we already have this, in terms of anticipating completions and things like that... I know that's not all you were thinking of in the implications, Daniel, but it seems like there's an order of magnitude more that an AI would be able to provide than just a productivity enhancing tool. It could do things like be an emotional support, like Hadi was talking about. Maybe the best way to think about this, guys, is to just remove all the technology from it and think of it like you're Neil Armstrong on the moon, and you have mission control in your ear at all times. They're watching what you're watching, and you have a team of scientists and everyone else watching your vital signs, telling you when you need to rest, when you need to eat, telling you what to do and when to do it. Of course, if you don't want that, you can tell them to shut up or something. But what I'm saying is, you don't have to think of this as some exotic technology; we're basically talking about a mission control in your ear that is made accessible to the average person through the cheapening of technology and AI. Obviously paying for a whole mission control costs billions of dollars, and nobody can do that, but AI could make that basically free. So, to take that maybe a little further: I think the market right now looks like the problems are broken down into smaller solutions, and I feel that makes a more efficient market in that sense, which is good, I guess, because people can specialize in different areas. I think the major difference between the article and what we have now is basically the data, the getting-to-know-you-even-better part. Another example of how far this already goes
is Google Now on Tap. That goes really far in that direction: not having it in your ear, but having the system watch your screen. When I'm on my Android phone, I just press one button and it will analyze the screen and tell me what my next step will probably be. So if, for example, there's an address there, or a word I don't know, it will suggest, with the click of a button, to Google that word or to navigate there, or something. So I think the real difference, the mind shift, is in having something, somebody, doing that data acquisition from day one, basically, at all times and through all channels. Meaning my phone wouldn't just help on demand; it would listen to me at all times, to every conversation I have with any person, and also when I'm sitting in a cafe or something. So I think that's what makes it slightly creepy, but, again, much more powerful. I feel like what we're getting here is all the power you're talking about with Google, but minus the creepiness, because you have control over your data. We talked about this a little bit in the last episode, how there is a potential for changing the way big data happens: rather than it being in the custody of large corporations, individuals can own that data. Not only is that safer, but maybe the implication we missed in the last episode is that people will therefore become more comfortable with sharing even more of themselves into the giant maw of the big data algorithm. And if people feel comfortable sharing absolutely everything, down to every conversation they have, into big data, then potentially the benefits could be commensurately larger. I can't believe I missed that aspect when we were talking about blockchain technologies, because we were already quite often talking about how maybe a potential perfect world would
include giving away pretty much all your data, and you'd be the happiest, because people or companies would be able to serve you the best. It's an interesting connection, that this would allow or enable people to actually be willing to share more of their data, to give away more data. So it's just mind-blowing, thinking it through. One of the social and ethical implications I can think of is the authority of parenting, because right now parents don't easily give up on applying their perspectives on life to their kids, and having an AI growing up with a kid basically means giving up that authority: having that AI in conversation twenty-four seven with your kid, connected to the whole AI, what I would call the universal AI, which is connected to basically all the information humankind has. In a way, it's as if it were a mom or dad. So I think if such an AI is to be implemented to grow up with a kid, it requires a very sophisticated, detailed, and personalized control panel for the parents, in order for the parents to keep their authority, to control the information and the channels the kid and the AI can be connected through. We already have parenting applications on the Internet that can restrict certain things on the devices kids use, so I think it's really important to keep that in mind when we talk about this aspect. At the same time, the article is very much emphasizing the language part, which is just one slight aspect of the whole idea, because kids already have an iPad, they have access to Google, they can all use Siri and get information. But what's different here is two things. First is the intelligence: the ability to learn the personality and to keep, categorize, and analyze data. That intelligence part is one novelty of this idea. The other part is the language: the language that
speaks to the kid, holds conversations, and evolves. So I'd say those two points are the novel parts of this idea; the rest we have in other settings already, I guess. Yeah, I can agree that parental authority is definitely being questioned, or let me say there's a conflict between primordial parental authority and this new AI that potentially knows better. Now, I imagine the vast majority of parents want the best for their child, and want that child to do as well as they can in society. So I can imagine this would be an amazing boon for poor parents, for example, because they could give their child the absolute best education. Basically, My Fair Lady, you know, "the rain in Spain falls mainly on the plain," for all those poor kids who would otherwise end up in a sort of backwards situation as they're being raised, not able to get the same exposure and the same guidance they otherwise would get. But I agree that some narrow number of parents, maybe religious or otherwise, would want to tell the AI not to say certain things, and it may be that for the first few years it's just silently collecting data rather than having any real influence on that child's life. And I think this goes to show that it's very important to think about who has control of this data. There might be government rules that would be necessary; even when you're a child, your parents aren't allowed to know everything about you. There was a big controversy, I said America, but I think it was actually in Canada, where one of our politicians said he would be in favor of forcing schools to disclose to parents if their child joined an LGBT
group at their high school, and the argument was that that's actually a bad idea, because you're outing the child to their parents even if the child doesn't necessarily want to out themselves, and maybe the parent might hurt them, et cetera. So there are a lot of questions here, in this parent-and-AI idea, about how much right the parents have to all this data the child is creating, and that likely goes to an extreme in some respects. You had so many good points in that, and we're still just focusing on maybe ten or twenty percent of the lifespan of a person, which is spent in childhood; that's where the article spends a lot of time. I really like the parallel, which I think is absolutely a good thing to think about: what would parents do when they had a personal teaching system for the child during the first couple of years? How would they then feel about giving up a part of their authority? And the thing you mentioned, Hadi, with the channels, that you can control what information the child consumes: I mean, I would still give the parents control over what the child consumes, and that's also something where I don't really know how to handle it properly. I'm not a fan of protecting children so much, protecting them from different views; I would rather expose them to a lot of diverse views and let them engage with things that help them grow. And I think that's something one should be aware of: if you give the parents the power to control so much of what the child consumes, that could actually have a bad effect, from an extreme perspective. One thing that comes to my mind, and it's kind of scary, is that if this type of technology gets implemented to its full potential, without any compromise, it would create a society in which parenting is completely centralized. Because when I think of an AI, I think of its data. I see a unique, sole AI seeded by
the one universal, complete data set, which is not personalized by humans, not affected, controlled, manipulated, and selected by centralized human choices. I see a universal AI. Thinking of that, imagine a million kids schooled and guided and taught through their lives by that universal AI. What happens is that those million people just become the same type of personality; they get influenced by the same universal source. Nurture is so important in diversifying personality that if that influence is centralized, it makes humans very identical. I think one of the aspects that diversifies humans is the decentralization of parenting: the nurturers you get are so diverse. This would create unified, very identical human beings, which doesn't sound very interesting to me. Hadi, I could not disagree with you more, in a certain sense; in another sense, maybe you're not so far off. In a certain sense, I think that everyone having their own personal tutor, which guides them through the education process and teaches them the things they want to be taught, is the exact opposite of the uniform education system that was, I think, first pioneered in Prussia in the nineteenth century and spread to most of the developed world, where everyone's in a classroom following a curriculum, with national governments passing laws that say what's supposed to go in the curriculum, in America, for example, and in other countries. Right now, what we have is every child getting a one-size-fits-all education. With this system, instead, the child can be taught at whatever speed makes sense for that child, and can be taught the subjects that interest that child. So I feel like it's a fully customized education, the kind we could theoretically do today if we had an infinite education budget, where every child has access to tutors. This is just making
that cheap enough to actually implement. So I don't think it's a uniform thing. What makes you think an AI has to be the same for every single person? I mean, the whole idea here is that it's a personal assistant that molds to your personality and your interests. I think what might lead to that thought, apart from what Hadi said, is that there would be a single best solution in terms of teaching and developing, meaning that that is the one that gets implemented, leading to a more streamlined education, a more one-size-fits-all solution. But I think that's also where the connection between your two opinions comes in. Here's an interesting thought: this gathers so much data, so let's talk benchmarking. It's like figuring out best practices. Everybody's different, as Michael is saying, so everybody would need a different education, but that's precisely what's problematic, because we don't know that much about people, about psychology. So this would basically give us a new way to conduct studies: you gather data about each person, you get the data about how they're being taught, and you get the data about their success in their careers, which is a very sensitive topic in psychology, and you can benchmark computed predictions against everything, in a very precise context. So that might be interesting. Connecting to this, I can bring an example. Think about the situation where the kid says, hey, let's have some fun, let's do something, do you have any idea? And the AI would make a decision; if the default path is one activity, say baseball, it might instead say, let's play basketball. But then, how can the AI decide what's fun? There are, like, a million types of activities the kid could be influenced to
take up. I think what can be helpful here is to define a function which applies randomness to the decision making, because there are millions of ways a life and a taste can be shaped, and the question is how those can be influenced for a kid that has no prior preferences. When kids are born, they're not born with their preferences; the preferences are shaped by the guidance and influence they get from the human beings around them. But if there's an AI that has the most influence on the kid, how are those preferences going to be shaped and influenced? That's the big question, the million-dollar question. When you think of those AIs guiding kids, that function that applies randomness is a very, very important part of this technology, I think. I also think that's crucial, but I thought about this earlier, when we were talking about the parents, because I feel that's something the parents will probably be deciding. The parents would say to this kind of system, these are the values important to us; they would say, for example, expose him to all kinds of different sports and people and cultures and political ideals, or whatever. That's something the parents would decide, because maybe they want their child to play baseball. Yeah, I think the most useful way to think of a new technology is often to remove the technology aspect from it and think of it in as old-fashioned a way as possible. A king's son used to have a private tutor, right? That's how it worked: if you were a rich person, you had several tutors. Now, how did those tutors decide what to teach the child? Well, presumably they had conversations with the king and the queen and asked them, OK, what would you like your son to be taught? So, just like Daniel is saying, I'm sure that's what would happen. But Hadi, I think you're right, and this applies not just to the AI when it's in child mode but
also when it's assisting you as an adult. And I think the most valuable part of an intelligent agent is not so much the information it has, because of course we could just Google it, or search Wikipedia, for any amount of information today. The useful thing an intelligent agent has is that it can provide context. In the context you are in right now, it can tell you what is genuinely the best thing to do in that moment; it can make decisions in that moment for you. And to me that's one of the most useful things. Like self-help: I feel like there are a million books out there telling you obvious things we all know, but in a given moment in time there's probably one thing you should be doing or not doing, and having an agent there to tell you what to do would, I think, be really useful. Exactly. Putting things into context very often means you are arriving in new situations, and it would be great there. For example, what I'm a fan of is these briefings, I don't know if you know them. Say Hadi were going into a Let's Make the Future meeting and had only two minutes to get all the information he needs; the AI would just come to him, as if it had been part of the preparation, and say, OK, this is what happened with Michael's and Daniel's and our AI assistants' lives this week, this will be the topic, this will be the new stuff for you, and this is the stuff you know already.
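The two-minute briefing just described is, at bottom, a diff between a meeting record and the assistant's model of what its user already knows. A toy sketch in Python; the data model here (plain topic strings, a set of already-known items) is entirely made up for illustration:

```python
def briefing(meeting_items, already_known):
    """Split a meeting record into what's new for the user and what
    the assistant believes the user already knows.

    meeting_items: list of topic strings from the meeting record
    already_known: set of topics the user has already engaged with
    """
    new = [item for item in meeting_items if item not in already_known]
    known = [item for item in meeting_items if item in already_known]
    return {"new": new, "known": known}
```

A real assistant would of course need semantic matching rather than exact string comparison, but the shape of the feature, "this is new for you, this you know already," is just this partition.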
So it would be a time saver, and the quality of your life and your actions would be higher. It's really about that context. Oh, you could go ahead. Please accept my apologies, I have to leave right now, so I'll leave you two to the conversation. See you next week. See you next week, Hadi. Bye. OK, so can I rev up the discussion with that last topic, or do we think there is anything else that we've missed here? I think there's one pretty extreme thing that makes everything sound creepy, and that's basically that we are programming an AI which, if we wanted to give one name to what the AI actually learns in the context we're talking about, learns how to be human. You want the AI to understand you, because how else will it give you the right information at the right time, at the right place, or whatever? It can do that because it knows how your brain works. And that might be a slightly frightening thought if you think about it: the AI being with you from childhood on, it will understand how we think, basically.
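The point made earlier about a function that "applies randomness to decision making" maps onto what reinforcement learning calls an exploration strategy. A minimal sketch, with made-up activity names and enjoyment scores: an epsilon-greedy chooser that usually suggests what the child has enjoyed most so far, but occasionally suggests something random, so the assistant's own history doesn't lock the child's preferences in.

```python
import random

def suggest_activity(enjoyment_scores, epsilon=0.2, rng=random):
    """Pick an activity to suggest.

    enjoyment_scores: dict mapping activity name -> observed enjoyment
    epsilon: probability of suggesting a random activity (exploration)
    """
    activities = list(enjoyment_scores)
    if rng.random() < epsilon:
        # Explore: a deliberately random suggestion, so the child's
        # tastes are not narrowed by the assistant's past observations.
        return rng.choice(activities)
    # Exploit: suggest the activity with the highest observed enjoyment.
    return max(activities, key=enjoyment_scores.get)
```

With `epsilon=0` the assistant only ever reinforces what it has already seen; a small positive epsilon is one simple way to keep the door open to baseball, basketball, and everything else.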
What an AI does is model the brain. Maybe it's not as good as the human brain at transferring thoughts from one concept to another, but it gets updated and keeps learning, slightly similar to a brain. So if you give it all the information from an entire human life, it will be extremely powerful in what it knows about you, what is possible, what you care about, everything. So I feel like this is a huge thing, and particularly when we think of all the dystopias, that's precisely the problem; I think we're getting really close to one there. I suppose it's true that if an AI is really as good as we're imagining, then it could do a voice imitation and just take your phone calls for you, right? It could compose your emails for you; it could lead your whole life for you, if it's that good at anticipating your needs. Then I guess that raises two issues. The first is: does that present an avenue for humans to be replaced? Which I think is sort of what you're talking about there, or maybe you're more abstractly talking about the visceral creepiness of this whole idea. But the second thing, for me, is the ethical implication of an AI that we're making to be so smart as to be able to copy the actions of its human masters. That seems unethical, because if you've made something smart enough to copy its master's actions, it seems by definition that it must be as intelligent as its master, and therefore enslaving it to follow one person around throughout their life doesn't seem ethical. Exactly. I was not just bringing up creepiness for its own sake, but particularly for that sake, when you think of exactly the ethics involved, because it's somebody that gets literally emotionally involved in your life, basically; or, let's say, it gets involved in your emotional life. But also in terms of when it will start to want things itself. I'm thinking of this
movie, Her, I think it's called. Seen that one? Yeah, yeah. Where, at some point... well, actually, that's what the article is basically talking about anyway. At some point it will understand what else there is to reach in life, what other important things or problems there are, and then the AI would jump in and say, OK, I'm done with these small problems, because I know enough and I know how to solve lives, and then it will be gone, or it revolts. I feel like when we get into the realm of dystopian science fiction and think about an AI that does degenerate things as a consequence of being asked to emotionally involve itself in its master's life, that leads to simplistic outcomes that are fascinating, I suppose, but nevertheless I find it more interesting to think about an AI that we do in fact have complete control over. Then we're just exploring the possibility space of how we design an AI in such a way that it really does check all these boxes we want to check off. So, yeah, I guess in my mind the question is: is it possible to have a personal assistant that is as good as we want it to be without it being sentient and therefore raising these problems? And I'm not sure of the answer to that question. I mean, this is getting slightly off topic, but if people in the future, as I strongly predict they will, start to prefer to mate with robots rather than other humans, because robots will be able to match them on preferences and be more attractive, et cetera, I wonder if, again, it will be possible to create robots that are just sentient enough to provide the experience a human needs to fall in love with that robot, but not so sentient that it's unethical to have them basically imprisoned there as a slave attached to that human. A similar ethical quandary, absolutely. Maybe I'm an optimist.
Tenuously an optimist, that is, that we can solve this problem, and a pessimist about how complicated humans are, because I think that humans can be fooled by a sub-sentient AI, and that will be sufficient for our purposes. That is so funny.
So one of the top theses, one goal which is, like, intrinsically programmed into it, would be to make your life better, you could say. And what's interesting is that making your life better can be relative, meaning, for example, if you want to get a certain job position, you don't need to be the best in the world, just better than every other applicant. So, yeah, just as an addition to earlier: what a creepy or dangerous AI might start doing, instead of putting you in a better position, is putting others in worse positions. I don't know, maybe, like, attacking their AI assistants, or whatever, even while we're asleep. I guess you're right there, and that's right: if we're armed to the teeth with AIs that are just making each of us do as well as possible, then I can certainly imagine horrible edge cases, for sure. Yeah, I mean, imagine if
one of us has, like, a mafia boss, a whole mafia behind us, cracking knees to make sure that we succeed. Yeah, that seems like a problem. Oh yeah, I was just thinking: first there comes the AI assistant, and then there comes, like, an AI lawyer, you know, and then your personal AI fighter; and in a similar fashion it will actually all sit within the AI assistant, so you'll get a whole board of AIs with you. It'll be like the AI company structure of the person that is you, of the company that is your personality.
But anyway, go ahead. Yeah, one thing that people often say when it comes to these ethical worries, about when it gets out of control or might get out of control, is: let robots do something that we're really bad at, like computing, something separated from our lives enough that we can actively control when to use them. And it might actually be a good question to ask: do we want to go right ahead into the scenario this guy suggests, or do we rather want to have assistants that are really high quality but under the complete control of the person, not listening to every thought, but more on demand: it helps you, and if you don't want it to help you, it won't. So you don't detach the responsibility from the person to the AI. Well, certainly you want the person to have supremacy, or sovereignty, over their own actions; you wouldn't want the assistant to have the final say, because otherwise we've reversed the relationship, and the AI is now the master. It becomes a complete, potential slave-state situation; I'm only casting about for these analogies. I want to go back to what you were saying about battling AIs, dueling AIs: your AI, your assistant, is out there in the world making you look good by making everyone else look worse, or something. And you can think of much more nefarious situations, like, you know, a terrorist, someone bent on doing something really bad for the world, on hurting people and killing them. I can see that they could have an AI, turn off its ethical subroutines, and have that AI magnify that person's power immensely, right? They could do all kinds of terrible damage to the world. And the answer that usually comes back when people talk about these sorts of malicious AIs comes in a few forms. One would be
regulation: some kind of big, heavy-handed government regulation that says people are not allowed to possess certain technologies. But that seems impossible in a world where it's just source code and computing power. It's not like uranium that we can track; we can't really stop the proliferation of artificial intelligence technology unless we have a totalitarian government, and even then I don't think we could really do it. So the second solution presents itself as the only viable one, which is basically, yeah, a battle of AIs. You might have some terrorist with an AI, but then you just need the good guys to have a stronger one that can defeat it. And in such a world, it makes me think just how important it's going to be, how it's going to become a basic necessity, for every human being to have an AI that is their personal advocate. Because it's going to be a hyper-complicated world, awash in fast-moving trends and corporate structures and all kinds of things moving so quickly that no human will be able to process the complexity. At that point we will be dependent on our intelligent agents to advocate for us and to negotiate this complicated world on our behalf. It will be a necessity.
Right now my thoughts are in a very interesting future world, and I really like what you're saying about having the strong AI and the battle of the AIs. Actually, that brings us back to where we are today, because today a lot of it is already about who has the most computing power. Whoever has that actually has the most power now, in terms of companies and in terms of data, which is the most valuable asset. So it will be interesting how that works out. Well, I just have one last thing on that, because it really bothers me when people talk about, let's say, Twitter. There's been a big brouhaha about how much harassment is going on on Twitter. I think complaining about that and saying it's a terrible thing is coming at it from the wrong perspective, because first of all people are typically anonymous on Twitter, or they can be anyway, and they're able to broadcast their message whenever they want, so it's just human nature that there are going to be terrible, just awful harassing messages. Complaining about those messages, asking that the messages be stopped at the source, i.e. banning free speech basically, feels like the wrong approach. Instead, if we all had intelligent agents, we would have the power individually to determine what reality presents itself to our eyes. For example, if I never want to hear the word fuck ever again, I could instruct my intelligent agent to scrub the world of that word so it never passes before my eyes. Or say I hate Nazis and I don't like the idea of Nazism, which may be a normal thing:
maybe you want to scrub that from the world too. Basically what I'm saying is that this brings the whole notion of censorship into a highly customized realm, where individuals can decide for themselves what is appropriate and what is not, and it takes away the idea that we need centralized censorship, that we need to censor things at the source. I think that makes for a freer world. It will be very interesting to figure out how far that should extend to make it actually useful and not dangerous. What I often think about, because you say it's just human nature, and it is human nature, but for example what we see with the voting habits of people is also a human issue; I think the major thing that has changed over the last decades is the efficiency of information flow: you get everything very quickly. And to come back to the censorship thing, which is all extremely related, we kind of have this already, and it's not called censorship, but it's suggesting to you what you want to hear. I'm talking about the Facebook problems, because that's kind of censorship: I don't want to read right-wing news, for example, so Facebook does not censor it for me, but it does not suggest it to me either. Not censorship in the strict sense. And as I was also saying earlier, I'm not a fan of restricting access to information,
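The word-scrubbing agent described here can be sketched as a tiny client-side text filter. This is a minimal illustration, assuming the agent sits between the feed and the reader; the function names are made up for the example:

```python
import re

def make_scrubber(blocked_words):
    """Build a filter that masks any of the blocked words in incoming text."""
    # One case-insensitive pattern matching any blocked word as a whole word.
    pattern = re.compile(
        r"\b(" + "|".join(re.escape(w) for w in blocked_words) + r")\b",
        re.IGNORECASE,
    )
    # Replace each match with a same-length run of asterisks.
    return lambda text: pattern.sub(lambda m: "*" * len(m.group()), text)

scrub = make_scrubber(["fuck"])
print(scrub("Fuck this noise"))  # → "**** this noise"
```

A real agent would of course filter at the level of meaning rather than literal strings, but the mechanism is the point: the filtering happens at the reader's end, not at the source.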
or restricting how much children should see in their childhood. I also think here that exposing a person to different ideas is always a good thing. I mean, that's a personal preference for how my own bot should behave, but generally, I think there is a great danger that we will all fall into our silos and never come out, if you can control information like that. That's a great point, and it remains to be seen. I guess if we look at current trends, that is the direction we're going, and if we're extrapolating, it seems like it's only going to get worse. So maybe the answer is disempowering governments in terms of what they control in our lives, so that the political decisions being made by more and more extreme politicians don't really have much of an impact on the world. If all government is doing is meting out justice, it's not as consequential when people make bad political decisions; whereas if government is really large, providing health care and education and redistributing income in huge ways, then it becomes much, much more important that people get political decisions correct. I feel like that would be an interesting topic to discuss at some point. The thing is, I feel like we always take very liberal standpoints in our discussions, but empirically you can see that if you try to apply pure liberalism, which sounds so great in theory, you very quickly run into problems. Very often we have these liberal ideas, and the question is what government will be in the future: will we keep running into the same fallacies that liberalism produces, or is that something that might at some point be avoided? I don't know. Probably for every major technology that impacts all of us hugely there will be some kind of regulation, and social media is one of the most important things
right now. So that's related, but run by companies, so all of that might be an interesting thing to discuss, and also how your AI could become our silo; that's a good point. Social media is interesting: it's supposed to be social, supposed to bring us all closer together, but eventually what happens is that everybody just sits in their silo, staring at their phone, because everybody else is over there. So I guess relationships are changing pretty drastically, and it will be a silo where you're with everybody and alone. What I see in the broad sweep of developments in the next twenty or thirty years is that almost all the technologies we're talking about, AI assistants, 3D printing, the blockchain technologies we were talking about, seem to be pointing in the direction of greater decentralization, of greater self-sufficiency. And it seems to me that has implications for what kind of political system is sustainable, or stable, in a world where this kind of technology is available to everyone. Though I am disheartened, if that is the goal, and the goal I want is a more decentralized society, it is disheartening to think of the direction that governments in the developed world seem to be going, which is certainly not toward greater decentralization. So yeah, hard to say. I think it will be a chaotic playing-out of these trends over the next decades, as always, so we'll have to keep watching the skies. I think the role of government regulation in the future would be a topic of its own, although I feel like we at least partly discussed it already, with that book or something, when I was in Argentina.
That's true, yeah. OK, so should we do the pitches, or will it be boring? Well, I do have a pitch, but theoretically with two people we cannot determine a winner. That's not a problem if you vote for yourself hard enough. All right, so let's get into it. To introduce the rules: in the elevator pitch, Michael and I will each give a business idea in less than thirty seconds, and then we'll vote. Yeah, that's it. Go ahead, Michael. OK, my business idea: AI beer goggles. We've all heard of beer goggles: when you drink beer, everyone around you looks more attractive. So imagine an augmented reality setup where you walk around and the goggles simply make everyone look better, all the time. This just makes for a more pleasant world out there, because everyone looks better, but also, when it comes time to date somebody, you have access to a far greater number of people you could date, because everyone looks fantastic; they all look like supermodels.
Actually, while you were talking, I was thinking it would go in another direction, but I like this better. I'll quickly sketch the direction I thought it would go: I thought you would say something about turning interior values into exterior looks, so the glasses would make people who share your values or your interests look more attractive, as a kind of alignment. Yours seems dangerous when it comes to dating, because in your world you would see more people who look attractive but don't share your values. Because again, like we were talking about earlier, we want to discuss with people who will shake things up.
With all those people there's always the barrier that they have different opinions, and then there's also the barrier of how they look, so I don't know, that might reinforce the bubble. Yeah, but I feel like if you combine this with the idea of the personal assistant who knows everything about you, the thing can play Cupid on a completely different playing field than is normally available, because usually we're constrained by completely random configurations of cartilage in people's faces, right? There might be a perfect soulmate out there for you, but, you know, she's hideous or something. So instead the AI can simply fix that all up for you and make sure to guide you subtly in the direction of what would be best for you in terms of personality compatibility. That's even better than I thought it would be. "Oh shit, honey, I forgot to put my glasses on." "Oh no, it's better."
Yeah, don't forget them.
OK, that's a good one. All right, on to yours.
So you know how Google uses data on traffic to inform you of traffic jams? Another huge problem, for many of us, is the parking situation. So imagine something like a Google traffic alert for parking: an overlay visualizing the parking situation in different areas with different colors. When you want to arrive somewhere, you want to know: when should I start looking for a parking spot? And it might be very neatly integrated. For example, if you now put a location, say a restaurant, into Google Maps and navigate there, it will tell you the shop might be closed when you arrive; that's extremely relevant information, and parking is something Google could tell you about in the same way: the typical walking distance from your parking spot here is this many meters, so it might make sense to pay for parking, or to arrive earlier. And just to tell you how the idea came up, it's actually slightly funnier than just finding parking. I was driving a car from a car-sharing service; there was just a really short route I wanted to do, so I unlocked the car with my phone, drove there, and tried to find parking, and eventually I did not find any and ended up at the exact same spot where I had left. So that basically meant half an hour and like fifteen bucks paid for what was really just a waste of time. And just to add one last feature: some anomaly detection, if it knows that there's a concert or something, which was the case that day.
That's it, yeah. I can only imagine how frustrating that must have been, and that sounds like an absolutely brilliant idea. Now, my interpretation of your idea is more specific: if it could just tell you where the free parking spots are and where to go, that in itself would be absolutely great. Imagine you press your phone, ask it for the nearest free parking spot, and it tells you where to go, because it has access to data from every car driving on the road, to video camera information, to whatever amount of data you can imagine it would need to answer that question. I think there are already solutions in that direction, kind of, and I guess a problem with that, thinking about it from a business-model perspective and how you would implement it, is: how do you deal with the free parking spots? You can't sell the same spot to every user of the app. You could come up with some kind of supply-and-demand model or something, I don't know. But even just the general density of free parking spots might already provide a lot of value, and it would be really easy to implement. And the other thing I'm thinking is that maybe the idea lies with the city: maybe it could be a revenue generator if they take all their free spots and make them available in an app with dynamic pricing, still very cheap, but dynamic pricing, so they're making a small amount of money from it depending on the day, and then you can book a spot easily on the phone. Anyway, that's a slightly different idea.
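The dynamic pricing floated here could be as simple as a function of the area's measured occupancy. Here is a minimal sketch under made-up parameters, not any real city's scheme:

```python
def dynamic_price(base_price, occupancy, surge_factor=3.0, exponent=2.0):
    """Price a bookable spot from the area's current occupancy (0.0 to 1.0).

    Near-empty areas stay at the cheap base price; the surcharge grows
    polynomially, so the price only rises steeply as the area fills up.
    """
    if not 0.0 <= occupancy <= 1.0:
        raise ValueError("occupancy must be between 0 and 1")
    return round(base_price * (1.0 + surge_factor * occupancy ** exponent), 2)

# Hourly price for a 1.00 base rate at rising occupancy levels.
for occ in (0.2, 0.6, 0.95):
    print(occ, dynamic_price(1.00, occ))
```

The exact curve is something a city would tune; the point is that an empty street stays nearly free while a full one prices in the search time it saves the driver.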
You know, I think that's actually a separate idea: some kind of supply-and-demand model from the city for parking. But I like combining it with exact navigation to a pre-booked spot, because then you have something much more valuable. Because of the shoe-leather cost, like you said: you spent thirty minutes driving around trying to find a spot, and that was a big waste of time, so an app that could remove that human waste of time would really be something. I heard somewhere that at certain times of day in New York City something like thirty percent of the cars on the road are just looking for parking. Imagine if you could take those off the road; imagine all the benefits. Yeah, exactly. So it would provide value not only to the customers looking for a parking spot, but also to general traffic, to the other participants in traffic, and also to the environment, because people might not even take a trip if the app warns them that there are no spots and that they're probably going to spend this much time looking for one.
They might take some other mode of transport instead, so that's valuable information too. But anyway, let's go to the voting part. OK, sounds good. So now you and I have to cancel each other out, because I'm going to vote for you and you're going to vote for me, as the rules require. I'm going to suggest that Daniel win this one, because your idea is far more down to earth. Well, if I had heard your idea first, I would vote for you, so what we could do is just call it some kind of tie today: two totally great ideas, and I can't think of any better ones. OK, so that was our show today. Thanks everyone, I really enjoyed it; I hope the listeners thought it was well planned too. Do you have something more to say, Michael? Oh, that was just a great conversation, one of our better ones I think, and I had a great time. Have a good night. OK, have a good night, see you later.