Artificial Intelligence (AI) is not off in the distant future… in some ways it is already here. How is AI already changing our lives? Does it work independently of us or does it also have our all-too-human biases? After hashing out machine learning on Episode 19, Brian and Fedor sit down with Picasa founder and serial tech entrepreneur Lars Perkins to discuss AI in broader strokes on this special edition of Miles To Go.
Fedor Kossakovski: Thank you so much for joining us one more time. This is a special edition of Miles To Go that we call Hash It Out, with me, Fedor Kossakovski. I do some writing, producing, web editing for Miles.
Brian Truglio: And I’m Brian Truglio, senior editor and producer also for Miles O’Brien.
Fedor Kossakovski: And what we try to do in these is take a science or tech topic that we are usually covering for the PBS NewsHour or for a film and try to figure out what's important about it, really try to hash it out. The topic we've been discussing right now is AI, machine learning. In the past episode we talked about machine learning specifically, and we spoke to our fellow producer Cameron Hickey, who together with Miles O'Brien put together a series for the PBS NewsHour about junk news: how it proliferates on social media, on the internet in general, and the issues around that. One of the things we found very interesting, a kernel in the story that we thought needed more discussion, was machine learning. And when we previously asked for topics to cover, we had Steven Gammon on Twitter, who has since disappeared so we can't read his tweet, but who asked us to look into AI algorithms and try to explain what is going on and why they're important.
Brian Truglio: It would be funny if Steven turned out to be an AI algorithm.
Fedor Kossakovski: Well, very self-aware, so yeah, very meta. We talked for a while, Brian and I, about how we were going to tackle this, because we thought it was a great idea and we were really happy to look into it. But, as we usually do, we ran down too many rabbit holes and had too much to talk about. Brian started reading a book about it, of course. So we divided it up: we did machine learning first, and now we're going to do AI generally, the issues and applications surrounding it. If you want to hear about machine learning, listen to the previous episode.
Brian Truglio: Let's talk first, I guess, about the relationship between machine learning and AI. How do the two of them fit together?
Fedor Kossakovski: Basically, machine learning and AI: right now you often hear them used interchangeably when people write about this, or on the radio or in the news or whatever, but really machine learning is a subset of artificial intelligence, or AI. Machine learning is just one of several flavors of approaching how computer programs can rewrite themselves and learn from past experience, usually with the direction of humans, which I think is going to be a crucial part we're going to get into. This is hard to understand in the abstract, but when you start thinking about it specifically, one of the biggest machine learning systems we all interact with on a daily basis is the set of Facebook machine learning programs that figure out what to show you, the News Feed, which is what we covered in the previous episode. I was looking through previous interviews that Miles has done, and one thing that popped out to me was a conversation he had with Dan Zigmond, who is the head of News Feed analytics for Facebook. Zigmond said that they have algorithms, working with humans, that can detect terrible stuff that should never show up in the News Feed: pornography, sexual violence, terrorist beheading videos, all that kind of stuff. That stuff gets deleted outright. But then there's the harder-to-classify stuff: junky news, clickbait, poorly-sourced, hyper-partisan material, which I think is in some ways a bigger threat and a bigger issue. And one thing they brought up in the interview was that they have this machine learning algorithm to determine what people should or shouldn't see at the top of their News Feeds. He says, "when we use ranking as a way of kind of reducing someone's distribution it means that the people who really want to see it most will still have it there. They can still find it.
It's not being taken out of the public square but it's not being sort of pushed on people who may not actually want to see that kind of material." And another quote he had was, "over time we learn about each person's kind of preferences and their likelihood to do different kinds of things on the site." So basically they're learning from users' experience, figuring out what kind of content they will or won't like, and then feeding them the posts that would garner the most engagement, and hence selling ads as well. But an interesting thing that comes up: there have been reported instances of certain Facebook pages getting shut down and then complaining it's because they're conservative, most prominently Diamond and Silk. And I think there's a real question to be asked: is this because of a bias encoded by the mostly, I would assume, liberal coders working for Facebook, via that training data we discussed last time, where you give it something and you tell it what it is at the end? Is it coming from that, or from the algorithm itself?
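To make the ranking idea Zigmond describes concrete, here is a toy sketch of demote-rather-than-delete ranking. Everything in it (the engagement scores, the 0.2 demotion factor, the posts) is invented for illustration; Facebook's actual News Feed models are vastly larger and not public.

```python
# Toy sketch of ranking-as-demotion: junk-flagged posts aren't removed,
# they just sink in the ordering. All numbers here are invented.

def rank_feed(posts):
    """Sort posts by predicted engagement, demoting junk-flagged ones.

    Each post is (title, predicted_engagement, looks_like_junk).
    """
    def score(post):
        title, engagement, junky = post
        # Demoted, not deleted: flagged posts keep a fifth of their score.
        return engagement * (0.2 if junky else 1.0)
    return sorted(posts, key=score, reverse=True)

feed = [
    ("You won't BELIEVE this", 0.9, True),    # high engagement, but flagged
    ("City council passes budget", 0.4, False),
    ("Local team wins", 0.6, False),
]

for title, _, _ in rank_feed(feed):
    print(title)
# The flagged post is still in the feed (anyone can scroll down to it),
# it just no longer sits on top despite its high engagement score.
```

The point of the sketch is Zigmond's distinction: the flagged post can still be found, it just isn't pushed.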
Brian Truglio: This is something that Zuckerberg and Facebook readily admit. Even in his testimony to Congress, Zuckerberg was like, look, we have to admit most people in the valley, in that industry, tend to be on the more liberal side of the spectrum. So we don't even have to say it could be a possibility; they're kind of admitting it's a possibility. Right.
Fedor Kossakovski: Right. And I think that's a good point, because Zuckerberg did bring that up in his testimony: that they're not a media company, they don't make editorial decisions. They're kind of pawning it off: well, it's not really editorial, because it's the computer choosing this for you. Maybe it's something inherent in the way that conservative clickbait differs from liberal clickbait, which Miles covered at length in his interviews with one of the most prolific writers of this junk news, Cyrus Massoumi. He says there seems to be kind of an arms race for conservative content, right. So maybe there is something about the conservative content itself that is flagged by the machine learning algorithm. But I think the crucial thing here is that we don't know. Right. And I'm worried that Facebook doesn't even know, because, like we discussed in the previous episode, it's kind of a black box. You train this algorithm, and then, looking at the code, you can't always tell how or why it's classifying certain things certain ways. Maybe you could for a smaller algorithm, but for something as big as what Facebook uses? Who knows. And I asked a few researchers who think about this stuff, and I got some very interesting replies; there are two I wanted to highlight, on opposite ends of the spectrum. One was from Ben Wagner. He's director of the Privacy & Sustainable Computing Lab at the Vienna University of Economics and Business. And I asked him: does Facebook need to, should they, allow people to review their algorithm? Is there some sort of regulation that should be done? He said, quote, "They need to be much more transparent about how their News Feed works to a wide variety of actors," so really release this stuff to scientists and policymakers so they can understand how it works. And he also said, "take everything you see on Facebook with a bucket of salt," which I really liked. I think that's good advice for anyone.
Brian Truglio: Could be the most important lesson.
Fedor Kossakovski: But interestingly, I also talked to another researcher, Brad Hayes. He's an assistant professor at the University of Colorado doing AI research. We've covered him before, for a piece we did on Twitter bots in the run-up to the election; he built a bot on Twitter called Deep Drumpf. By feeding a machine learning algorithm transcripts of Trump speeches, he was able to make something that, given a little bit of text, would finish the tweet for you as if it were Trump talking. But anyway, I was talking to him about this as well, and I was interested in his response, which was completely the opposite of Wagner's. Brad Hayes says, "I don't think Facebook has any obligation to tell anyone how any of its site works outside of complying with data privacy or disclosure laws. It sets a dangerous precedent if we demand tech companies to be able to explain how their technologies work as it is entirely possible that the models underlying their algorithm are essentially indescribable in any meaningful way," which is kind of what we talked about last time. He said, "Considering a machine learning model such as a neural net with 10 million parameters doesn't lend itself to an easy explanation especially it's not even clear what a satisfactory explanation would be. Should they have to explain what each parameter does? If they try to summarize it but do so imprecisely, are they misleading the public?" And he also says, "it's very difficult to come up with satisfying rules for these kinds of situations and so I don't think it's reasonable to say that they have any kind of obligation to be transparent. I do think it would be good practice for them to disclose rules they've imposed on their system to achieve special behavior. But anything else seems too unreasonable." Which, I see that point: they set up these huge systems, and once you train them, you don't even quite know exactly how they work.
I mean, that's one of the beautiful things about machine learning: it gives you the answer when you can't necessarily tell how it got there, and it gives you something that seems to be working, that at times classifies things better than people can. So it brings up an interesting point: weirdly, we're sometimes encoding these biases into the systems. And I think machine learning here is really a stand-in for all of AI. The big issue that I see is that it's weirdly a part of us but not a part of us, right. We are giving it something to train on, and if we don't pick the right training data, or if we don't pick the right goals, we can end up with some creepy stuff or racist stuff.
Brian Truglio: This kind of goes back to: how do you fix the problem of unintended consequences? And we brought up the concept of auditing. I like the idea of auditing because open source is one thing, being transparent is one thing, but I completely agree, and I think we hashed this out with Cameron last week. What that quote said is exactly right. It isn't a matter of just opening it up: look at the wiring, oh, look at that, the wires are crossed, these wires aren't in the right place.
Fedor Kossakovski: That’s like not meaningful for this situation.
Brian Truglio: No, it's too complicated. When you open it up and look under the hood, you don't really see anything that makes sense to you. However, in an auditing situation, you'd have people who work with machine learning, who understand it at a deep level, who understand the different techniques. We didn't mention it last time, but there are basically five different schools of machine learning theory. So, somebody who understands this stuff at a higher level: an independent person who understands that these algorithms are proprietary, who protects that, but at the same time can go in and ask questions of the programmers, the machine learning developers. What did you feed it? What are the parameters? What are its goals? What is it being rewarded for? Those kinds of questions. I think it's pretty likely that under those conditions they can look at it and say, I'm a little concerned about this, you didn't check for that, isn't this going to give it this bias? That kind of stuff, because what is real is that we are already seeing unintended consequences. We are already seeing bias. So the question is, how do we control for that? I agree; I don't think there's a rigid, rule-based regulation way that we can do it.
Fedor Kossakovski: You want, you want a Blade Runner task force, don't you? People retiring bad programs, herding them back into correct usage. It's such a young field, and there are experts who understand it very well and can give that guidance and be those auditors. But there's also just so much AI being used that it's really hard for one entity or group of people to track everything, so I think it's important for laypeople to understand how this kind of stuff works, and there are so many misconceptions. We were talking about a story that caught our eye, where these two Facebook chat bots developed their own language. We thought that was super cool and whatnot, but then we came back a few days later with a Snopes article kind of debunking it, or putting it into perspective, and I think that shows how misconstrued this AI application stuff is.
Brian Truglio: So it turns out that it was one of the two top trending stories of July 2017, according to Snopes. The other was a warning from tech entrepreneur Elon Musk that artificial intelligence, or AI, poses an existential threat to human civilization. So this story paired with that one, and I think people confused it with that one. The story was that Facebook had to shut down an AI experiment because two chat bots developed their own language.
Fedor Kossakovski: That sounds creepy yeah, it sounds scary and creepy yeah.
Brian Truglio: You know, it immediately brings me back to 2001, the scene where HAL is in one room and–
Fedor Kossakovski: He’s reading their lips?
Brian Truglio: He’s reading their lips.
Fedor Kossakovski: Okay great, bring up movies that I know. Don’t bring up The Graduate, which I still haven’t seen.
Brian Truglio: So this story, which I had heard and thought was pretty cool, and, you know, probably shared with a couple of parties.
Fedor Kossakovski: Yea, we talked about it.
Brian Truglio: It turns out that Snopes debunked it, and when they did the research and talked to the actual people at Facebook who developed these two chat bots, as it were, it actually wasn't the story at all. What's revealing is that I think the true story actually says something more interesting about machine learning. I'll give you a few details. A report published the day before Musk gave his speech delivered a fascinating account from what was called the FAIR team, which stands for Facebook Artificial Intelligence Research, of their experiment, and it gives a little snippet of the language that developed between these two bots, which were called Bob and Alice. So Bob says "I can can I I everything else." And Alice says "balls have zero to me to me to me to me to me to me to me to me to me to." Which I may have left out a "to me" or "to". In any case, you get the idea. Sorta sounds like English, but it's off enough that it's creepy.
Fedor Kossakovski: It’s the uncanny valley of English right now.
Brian Truglio: And it sounds like complete nonsense to us, but basically these two bots were actually designed as machine learning negotiating algorithms. The ultimate goal is to create bots that could negotiate with human beings, not with each other.
Fedor Kossakovski: You mean negotiate like just like travel agent or whatever?
Brian Truglio: They’re trying to see if they could get an AI algorithm that would get the best deal possible.
Fedor Kossakovski: Oh, a literal negotiation.
Brian Truglio: But basically negotiating is an incredibly abstract thing.
Fedor Kossakovski: I’m still terrible at it so maybe I could take some pointers.
Brian Truglio: I'm absolutely awful as well. You could hire Bob or Alice. What they realized, when they saw all of this very strange language, is that they were not rewarding the machine learning bots for sticking to English. They forgot to put that in there, right. They forgot to align it to that human goal, like: the one thing you can't do is violate grammatical English, or something like that. And so what happened was they actually just started developing a shorthand language. Now, what's fascinating to me is that this is something that humans do, because language is ever evolving, and we do tend to evolve shorthand language over time.
Fedor Kossakovski: Yeah, I'm thinking texting and stuff like that. I mean, that would look like gibberish if you had shown it to someone 50 years ago, ten years ago.
Brian Truglio: The only thing is that, of course, with humans, you and I can develop some kind of shorthand language, but for it to be used widely it would have to be accepted by hundreds, thousands, let's say millions of people, who then have to adapt. So language evolves slowly, but it does evolve. When you only have two AI algorithms.
Fedor Kossakovski: And they're doing it much faster, yeah.
Brian Truglio: And they're doing it way, way, way faster than humans. What you're watching is kind of language evolving in nanoseconds between two users who don't have to worry about.
Fedor Kossakovski: Making sense to anyone else.
Brian Truglio: A larger army of chat bots that they have to communicate with.
Fedor Kossakovski: So it’s a bit misleading to say.
Brian Truglio: It is definitely misleading. And also, they didn't shut it down; they basically just corrected that reward system. And the other thing is, it's not unknown. In fact, it's quite the opposite: this is a frequent occurrence.
Fedor Kossakovski: Well, it seems like: how are you going to get the right answer if you don't even know how to ask the question?
Brian Truglio: You give something a very narrow purpose, and if you don't tell them to stick to grammatical English, then there are lots of little words and articles that they might not need, that might just get in the way, that take more time. Right. So they just start paring them off and paring them off to get to what the essential words are, for them to understand each other.
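That missing "stick to English" term can be sketched as a reward function. This is our own toy construction, not the FAIR team's code: the repetition-based penalty is a crude stand-in for the kind of language-model score a real system would use.

```python
# Toy sketch of the reward-design slip: if the reward only measures the
# negotiated deal, degenerate "to me to me" shorthand costs nothing.
# Our own construction, not Facebook's actual code.

def english_penalty(utterance):
    """Crude proxy for 'sounds like English': the fraction of repeated
    two-word pairs. A real system would score the text with a
    language model instead."""
    tokens = utterance.lower().split()
    bigrams = list(zip(tokens, tokens[1:]))
    if not bigrams:
        return 0.0
    repeats = len(bigrams) - len(set(bigrams))
    return repeats / len(bigrams)

def reward(deal_value, utterance, keep_it_english=True):
    r = float(deal_value)  # the task reward: how good the deal was
    if keep_it_english:    # the term that was initially left out
        r -= 10 * english_penalty(utterance)
    return r

shorthand = "balls have zero to me to me to me"
plain = "i can have the balls"

# With the language term omitted, both utterances earn the same reward,
# so nothing stops the bots from drifting into shorthand:
print(reward(3, shorthand, keep_it_english=False))  # 3.0
print(reward(3, plain, keep_it_english=False))      # 3.0
# With it restored, grammatical English is part of what's rewarded:
print(reward(3, shorthand))  # 3 - 10 * (3/8) = -0.75
print(reward(3, plain))      # 3.0
```

The fix Brian describes is exactly the `keep_it_english` term: the task didn't change, only what counts toward the reward.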
Fedor Kossakovski: Yeah, I think this is very indicative of the power and peril of AI. Very powerful systems can quickly evolve to address problems, or to figure out negotiating tactics or whatever. But if you don't know how to set it up, it's not very useful, and it can be harmful. So I think this is a good chance to get someone who's a little more knowledgeable in this than us to weigh in. And we turn to a friend of MOBProd who actually was on the first-ever episode of this podcast: Lars Perkins. He's a tech entrepreneur and investor with a lot of experience in this field. He founded Picasa, the image editing software, took it to Google, worked at Google for a while, and now basically does investment and consulting for various tech startups, a lot of which employ AI. And we thought that even though he's not an academic expert or researcher in this, it would be very helpful to get the perspective of someone who is thinking not only about how it works, but: what is the benefit? What is the future of AI? And what are potential pitfalls? Lars, you say you've been focusing a lot on trying to figure out how AI is changing the landscape of technology. Could you give us a little bit of background on that?
Lars Perkins: Yeah, so I have been involved in a lot of technology startups, and did Picasa, which was digital photography software that was sold to Google, so I was kind of around Google when some of these advances in artificial intelligence were enabled. And then more recently I've been involved with a venture fund here in Los Angeles, and I'm just seeing an enormous number of businesses that are being enabled because of the advances in AI, and then other businesses, and even industries, that are being transformed because of the breakthroughs that have occurred over the last decade.
Brian Truglio: We in a previous podcast talked a lot about machine learning. Can you explain machine learning and AI, how they’re related?
Lars Perkins: Well, sure. First of all, I guess I should make a disclaimer: I'm no expert in this. I'm not an academic. I've approached it purely from a pragmatic perspective: what's really happening in the world? How does it change the way that we live? And what financial opportunities come out of it in terms of businesses? So when we talk about machine learning and artificial intelligence, I guess the first thing is we have to agree on our terms. Artificial intelligence has two words, and the more controversial of the two is intelligence, in terms of trying to come up with an accepted definition. But I sort of think of it as: intelligence is whatever it is that a moth has more of than a rock, and a mouse has more of than a moth, and a chimpanzee has more of than a mouse, and a human has more of than a chimpanzee. So it's hard to define, but it manifests itself in sort of goal-oriented behavior that's influenced by stimulus from many different directions, achieving objectives that are more complicated than just flying towards a light, ranging from flying towards a light to flying towards that big white disk in the sky which we call the Moon. So that's intelligence. And then the artificial part, I think, is less controversial, and for purposes of this conversation we could think of it as basically meaning manmade. I think mostly we're concerned with kind of silicon-based contraptions here, even though there have been attempts at artificially intelligent things over the years that are purely mechanical, and there's some speculation as to how we might use substrates other than silicon going forward to create these types of intelligence. Machine learning is a mechanism by which we make the machines that we create behave in intelligent ways.
So if we step back: prior to the machine learning techniques that have burst onto the scene as practically possible over the past ten years, we had techniques that were based more on algorithms, and probably the best example that I can think of would be chess games. Computers playing chess have been around for decades, but before machine learning they were based largely on trying to codify the knowledge that human chess players have into a set of rules that a computer could execute. So we say: let's create a way of evaluating what the relative advantage of the two players is based on the position on the board, and then, OK, if I make this move, do I have a superior position? And if I make this move, maybe the other person will probably make that move. How many moves ahead can I look? And then what does the score of the board look like at that point in time? So it was all trying to develop all these rules that allow the computer to mimic the performance of the chess players. Machine learning actually means creating sort of a meta-algorithm that allows the computer to learn from data and create its own set of rules for how to behave, based on the interpretation of the data and the understanding of the objective, the goal that we've set for the system. And this is a technique that's really been enabled over the past years for a bunch of reasons, Moore's Law being one of them.
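The rule-based approach Lars describes, a hand-written evaluation function plus look-ahead, is classically implemented as minimax search. Here is a minimal sketch; the piece values and the tiny two-move game tree are invented for illustration, not a real chess engine.

```python
# Minimal sketch of the pre-machine-learning approach: hand-codified
# rules score a position, and the program searches ahead assuming the
# opponent also plays its best move. The tree below is invented.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # codified by hand

def evaluate(position):
    """Score a position: our material minus the opponent's.

    A position is a pair of piece lists, e.g. (["Q"], ["R", "R"]).
    """
    ours, theirs = position
    return (sum(PIECE_VALUES[p] for p in ours)
            - sum(PIECE_VALUES[p] for p in theirs))

def minimax(node, depth, our_turn):
    """Look ahead over candidate moves, assuming best play by both sides.

    A node is either a position (scored directly) or a list of child
    nodes, the moves available from here.
    """
    if depth == 0 or not isinstance(node, list):
        return evaluate(node)
    scores = [minimax(child, depth - 1, not our_turn) for child in node]
    return max(scores) if our_turn else min(scores)

# Two candidate moves; each leads to two possible opponent replies.
tree = [
    [(["Q"], ["R", "R"]), (["Q"], ["P"])],  # move A: replies score -1 or +8
    [(["R"], ["P"]), (["R", "P"], ["P"])],  # move B: replies score +4 or +5
]

# The opponent picks the reply worst for us, so the flashy move A
# (which could score +8) is really worth -1, and the quiet move B wins.
best = max(minimax(move, 1, our_turn=False) for move in tree)
print(best)  # 4
```

Machine learning flips this around: instead of a programmer writing `evaluate` by hand, the system learns its own scoring rules from data.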
Fedor Kossakovski: And let's get into that a little, because I think that's an interesting intersection with what you have worked on and know very well, which is image editing and image processing, through founding Picasa and working on that. It seems like one of the big recent technology booms that, hardware-wise, allowed this to accelerate is GPUs versus CPUs, right: using graphics cards to handle large amounts of data, and you can really run machine learning algorithms on graphics cards. Am I characterizing that correctly? Is this an important part, or would any kind of better, faster chip have done the same?
Lars Perkins: There are classes of problems, and image recognition, or object recognition from images, is one of the classes of problems that involve a lot of learning. I have to show you hundreds of thousands of pictures of cats and hope that the learning algorithm is clever enough that it now sees what distinguishes a picture with a cat in it from a picture that doesn't have a cat in it. And that kind of processing, that kind of learning, where you're extracting features from the images and then correlating those features with a particular object, requires an enormous amount of processing power. So rather than writing a smart program that says, okay, a cat looks like this: it has ears, ears are these things, it has fur and it has whiskers, and trying to create an ever more detailed description of what a cat is, we show the machine learning system, and in this case I'm describing a deep learning system, which is a subset of machine learning, lots and lots of pictures of cats, and we let it figure it out. And in order to do that we need lots and lots of computing power, and in particular we need parallel computing power. So when I talk about this having been enabled over the last decade, it means going back to the time that I was at Google. Google was creating the world's largest network of computers, and at that time they had some six-figure number of computers; it was kept very secret. But it was by all accounts the largest collection of networked computers belonging to a single organization on the planet, and probably still is. Now, in more recent years, GPUs, as you said, the graphics processing units that were developed to do parallel processing, to enable virtual reality and ever more realistic videogames, which require a similar type of simultaneous or parallel computing, have evolved, and those have become a way that we can now have that kind of massively parallel computing capability, with 2000 cores on the 1080 on the computer that's underneath my desk.
So we don't need huge data centers, although they still have a leg up, but we can at least do some subset of this kind of deep learning training on machines that are accessible to individuals now. So yeah, GPUs have enabled that.
Brian Truglio: So you mentioned deep learning. What's the difference between deep learning and machine learning?
Lars Perkins: Well, deep learning is a kind of machine learning. When I say deep learning, we're largely talking about what are called CNNs, or convolutional neural networks, which take information that consists of lots of data points and synthesize rules for understanding that data sort of on their own. Another type of machine learning that's not deep learning would be what's called a support vector machine, where we might take a dataset that has a finite number of characteristics. For example, I'm working with a company now that is doing analysis of audio of heartbeats and attempting to discern whether or not a heartbeat is normal. From the heartbeat you might look at a lot of different characteristics. You might say: okay, this is the frequency of repetition in the sequence of audio sounds; this is the number of individual sounds that occur in that repetition sequence, in other words the different beats; here are the components of the beats, with the amplitude of each of these subcomponents; we subtract away background noise. So essentially, let's say hypothetically, we're creating, oh, I don't know, 20 to 50 to 100 different pieces of information about the heartbeat that we're looking at, or listening to. We might feed that into another type of machine learning algorithm that's not deep learning and say: all right, I'm going to show you a thousand heartbeats, and 500 of them are normal and 500 of them are not. I want you to learn, by looking at these 50 or 100 pieces of data, what constitutes a normal heartbeat and what's an abnormal heartbeat. So there we're looking at finite, structured data about the heartbeat. Whereas with image processing, image recognition, which needs the more advanced deep learning techniques, you never say, well, you know what, if the thirty-seventh pixel on the four-thousandth line is white, that's probably a good indicator that it's a cat. Right.
You have to abstract things way, way back from that to see the bigger trends, and that's a different type of learning. But deep learning built off of big data is kind of the area that has just accelerated and made things like Alexa and Siri possible, both in terms of speech generation and speech comprehension. So it's getting a lot of press now, but it's only one type of machine learning. There are others.
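The feature-based workflow Lars describes, hand-extracted numbers per heartbeat plus labeled examples training a classifier, can be sketched in a few lines. We use a simple perceptron instead of the support vector machine he mentions, to keep the sketch self-contained, and the two "features" (say, a beat-rate measure and a noise measure, both rescaled to 0-1) and all the data are synthetic, not real heartbeat audio.

```python
import random

# Toy sketch: two hand-extracted features per heartbeat, labeled examples,
# and a trained linear classifier. A perceptron stands in for the support
# vector machine in the transcript; the data below is entirely synthetic.

random.seed(0)

def make_examples(n, normal):
    """Synthetic feature vectors: normal beats cluster around 0.3,
    abnormal beats around 0.7, in both features."""
    center = 0.3 if normal else 0.7
    label = 1 if normal else -1
    return [([center + random.uniform(-0.1, 0.1),
              center + random.uniform(-0.1, 0.1)], label)
            for _ in range(n)]

data = make_examples(50, normal=True) + make_examples(50, normal=False)
random.shuffle(data)

# Perceptron training: nudge the weights whenever a beat is misclassified.
w, b = [0.0, 0.0], 0.0
for _ in range(20):  # a few passes over the labeled examples
    for x, y in data:
        if y * (w[0] * x[0] + w[1] * x[1] + b) <= 0:
            w = [w[0] + y * x[0], w[1] + y * x[1]]
            b += y

def classify(features):
    s = w[0] * features[0] + w[1] * features[1] + b
    return "normal" if s > 0 else "abnormal"

print(classify([0.30, 0.32]))  # a beat near the normal cluster
print(classify([0.72, 0.68]))  # a beat near the abnormal cluster
```

In a real pipeline the invented `make_examples` step would be replaced by actual feature extraction from audio, and the perceptron by a stronger learner such as an SVM, but the shape of the workflow is the same: features in, labels in, decision rule out.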
Fedor Kossakovski: Gotcha. And so you mentioned a few examples of applications, right: the heartbeat monitoring stuff, or, probably more familiar for us, Siri and Alexa. What are other useful applications? Is there a specific type of application or use that AI is very amenable to? Or has the explosion of data out there made it indispensable for every field?
Lars Perkins: Well, I think it's closer to the latter, as we've gathered big data on all sorts of different phenomena. I mean, take health care, for example. With every aspect of a patient being recorded, you look at a vast population of patients: their blood chemistries, their EKGs, diagnoses made by physicians using conventional techniques, treatment modalities, and outcomes. You can take tons of that type of data, feed it into one of these machines, and ask it: OK, these patients had good outcomes, these patients didn't have good outcomes, figure out why. And you could expect the AI to come back and tell you: you know what, I've noticed that when someone has high blood pressure and you give them this drug, they are less likely to ultimately have complications of this type. And of course the exciting part of health care is when we integrate genetic information, the genome, into that, and you can begin to discern patterns of why particular drugs are effective with some people and not with others. Those patterns are hugely difficult for an individual to detect because of the quantity of data, and that's uniquely suited to the capabilities of these intelligent algorithms. Then there's financial data; you're always going to have finance in there, because even now the trading role on the floor has almost disappeared, because the vast majority of trading is done algorithmically, based on data that has been analyzed and rules that have been devised by these deep learning technologies. So: health care, finance, education. And each of those has hundreds and hundreds of applications; in health care alone, there are going to be hundreds of applications.
I mentioned the heartbeat, but you also have, I think it's at the Mayo Clinic, doctors who are actually now using AI to help inform their treatment decisions, using just the types of analysis techniques that I was discussing. So you could almost pick any industry. Think about self-driving cars: the ability to recognize an environment and respond intelligently to it is also enabled by these deep learning techniques, in a way that just was not possible 50 or 60 years ago, when we tried to use algorithmic, non-statistical techniques to guide the behavior of autonomous vehicles. You just see this across almost any field you could think of. And the implications for society, I think, are huge. Are there companies right now that are trying to create AI doctors, or have that as a goal? I'm seeing this kind of thing at the periphery. I'm not involved in the big initiatives that are out there; I'm seeing some startups where we can deploy a relatively modest amount of capital for a particular vertical application, like the heartbeat analysis, and there are a dozen other ones that are similar. But I think the larger initiatives have mostly been guided around augmenting the skills of the physician, rather than a complete, autonomous doctor who's going to make and implement treatment decisions. It does allow the doctor, I think in the case of the Mayo Clinic, to reference information derived from big data about patients when making treatment decisions. And over time you can imagine that would progress, so as to enable people with less training, in less developed parts of the world, to make better treatment decisions, because those treatment decisions are going to be augmented by these AIs. But I don't think medical school's going away in the near future.
Fedor Kossakovski: I have a question off of that. Especially in the health care realm, I think this is an important thing to consider when we're talking about using AI: I'm worried about the biases that could get coded into these systems based on the data that is pumped into them to train them. I'm just imagining, for example, some system where you're trying to determine how to allocate a certain medicine better, and based on the information that's coming in, maybe minority groups or underprivileged people have worse outcomes because of other circumstances. That could get codified into the system: OK, maybe the system then says it's not worth giving these people the medicine, because it's harder for that amount of medicine to help that same person. That's an extreme example, but if we do allow these computers to learn, set up their own parameters, and draw conclusions, it's like a black box, as Brian and I have discussed before: you put stuff in, you're not quite sure exactly what is going on in there, and you get a result on the other side. As we rely on it more and more, is that an issue? Is that something we have to be worried about? Or do you think there are ways around it?
Lars Perkins: Both. There are huge issues to be considered, and I believe there will be ways around it. But you can imagine that if it was purely the hospital administration that designed the learning algorithm, they might think the ultimate goal was some combination of health and payment.
Fedor Kossakovski: Right. That’s exactly what I’m saying. I think that’s a really scary thing to consider.
Lars Perkins: And if that was implemented, then yes, you would find you were having an adverse impact on groups that are socioeconomically disadvantaged. So that's a huge problem. And exactly that kind of problem surfaced when this type of system was used to evaluate recidivism rates in people who were being… determining whether they should be eligible for bail.
Fedor Kossakovski: Oh, it was parole or bail or something, yeah, I remember this.
Lars Perkins: Yes. And with the parameters that were set up in that particular case, they found there was a problematic correlation between race and the recommendation the system made. So you have to really design the objectives, the learning objectives, of the system: what are you really trying to optimize for? If you create the wrong goals, you are likely to program in implicit biases that may not have been intentional from the beginning. There's another example that's a lot funnier: I think they looked at decisions made by judges in Israel and determined that, for comparable crimes, offenders were sentenced much more leniently if they were sentenced just after lunch. So you did way better, and spent less time in prison, if you could get on the docket right after the judge had had his midday meal.
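[Editor's note: the objective-design problem being discussed here can be made concrete with a toy allocator; all numbers are hypothetical. If a disadvantaged group benefits slightly less per dose, say because of worse access to follow-up care, then a goal of "maximize total benefit" quietly starves that group entirely, while a goal that also values spreading doses around does not.]

```python
# Toy illustration of how the optimization goal encodes bias.
# Benefit scores are invented integers (tenths of a "success" per dose);
# group_B benefits slightly less, e.g. due to worse follow-up care.
benefit = {"group_A": 9, "group_B": 7}
DOSES = 10

def allocate(objective):
    """Greedily hand each dose to whichever group the objective scores higher."""
    alloc = {"group_A": 0, "group_B": 0}
    for _ in range(DOSES):
        best = max(alloc, key=lambda g: objective(g, alloc))
        alloc[best] += 1
    return alloc

# Goal 1: maximize raw benefit only -- group_B never receives a single dose.
naive = allocate(lambda g, alloc: benefit[g])
# Goal 2: benefit minus a penalty for piling doses onto one group.
fair = allocate(lambda g, alloc: benefit[g] - alloc[g])

print(naive)  # {'group_A': 10, 'group_B': 0}
print(fair)   # {'group_A': 6, 'group_B': 4}
```

The point is not the particular fairness penalty, which is made up here, but that the allocation pattern follows entirely from which goal got written down, exactly the "design the learning objectives" problem Lars describes.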
Fedor Kossakovski: Yeah, it sounds like you're supercharging the Freakonomics of the world with these AI systems if you let them run wild, if you don't really think about it while you're making the system in the beginning.
Lars Perkins: Hugely important. This gets into the whole next level of concern about AIs, which is: are we going to have a robot rebellion? Is Westworld season 2 on the horizon for us? Nick Bostrom's book Superintelligence talks about the hypothetical paperclip machine: if you're in the paperclip business and you create a machine that is absolutely focused on creating paperclips at the lowest possible cost, don't be surprised when it starts eating your atoms in order to make paperclips. You can easily imagine reaching a world which has lots and lots of paperclips but no one who needs them. That's called the alignment problem, and the challenge is: how do we make sure that the goals we're setting for these machines are truly aligned with ours? It's the classic Midas problem: I want everything I touch turned to gold, and that didn't work out too well. So you have to be very careful about what goal you're optimizing for.
Fedor Kossakovski: It sounds like we're starting to have those discussions now with these implicit bias questions about machine learning algorithms, but we need to be having them on a wider scale about AI in general, because it's happening so quickly, it's evolving so fast. Are we having this discussion, and is it going the right way?
Lars Perkins: I have to provide the disclaimer again that I'm looking at this from the narrow point of view of what smaller businesses are being created out there by talented entrepreneurs that provide an investment opportunity. There are people who have been far, far more deeply involved with the philosophical implications of this; I'm sort of a layperson bystander and don't have intimate knowledge of how these issues are being dealt with at the societal level. But I will say that the people I respect who are talking about this are, for the most part, quite concerned. They divide into a couple of camps: on one side, the Elon Musk camp, for whom artificial intelligence is the greatest existential threat to humanity; and on the other, people who see it in a much more positive light, like Ray Kurzweil and those folks, who say, well, it's silly to worry about that because we can always unplug it. Which I think is a bit naive.
Fedor Kossakovski: Yeah I would say so too.
Brian Truglio: Is there a way to audit these systems, or create test facilities, some independent way to put AI under extreme conditions and test the results?
Lars Perkins: I think there are ways to do that. The question is, who's going to take responsibility for doing it? Right now it's completely unregulated and driven by the profit motive. If it's driven solely by the profit motive, you're going to have some people who do it better and some who do it worse, and we hope the people who do it worse can fail in ways that aren't an existential threat to humanity. I would say there will be a few more self-driving cars running over people before society really takes a hard look at how to keep this genie in an appropriately sized box. But I do think autonomous vehicles are likely to become one of the first test cases for how closely this type of technology needs to be regulated, quality-control tested, et cetera.
Brian Truglio: Where do you fall on the spectrum, if you're going to put Elon Musk and Ray Kurzweil at the extremes, I guess?
Lars Perkins: Yeah, and I don't know if I'm being entirely fair characterizing Ray that way. I don't know that I'm well-read enough in the field to have a strong, defensible position, but it's one of these things where, look, even if there's a 5 percent chance of that happening, we should be very, very worried. So if you have Elon Musk at the high end thinking there's an 80 percent chance, well, you don't have to agree with him to think we ought to be paying a lot more attention to this than we are. It's sort of like climate change. You can be a climate change denier, but are you a 100 percent climate change denier? Are you that sure it's completely impossible? Because even if there's only a 1 percent chance, it's something we ought to be thinking about really hard. Unfortunately we sort of get boxed into this black-and-white world. But I strongly believe it's something we should be paying a lot more attention to. And the one point I'd like to make, which I believe I first heard from Tristan Harris on a podcast, so it's not original: for decades we've had science fiction about robot uprisings, where robots become sentient, the alignment goes wrong, and they take control. Whether it's Terminator or The Matrix or the Asimov "I, Robot" series, it's about what happens when these intelligences get beyond our control. But I'd make the point that there is already an intelligence that was beyond our control, and to a certain extent still is, that has tremendously impacted society, arguably in a negative way, and that's Facebook. Facebook unleashed a machine learning algorithm working according to the exact principles I talked about, and set the goal to be attention and engagement, with only the most minor rules, like no pornography or graphic violence; otherwise it would have optimized for those things too.
But even with those minor rules excluding that type of content, it just decided to optimize for outrage and anger, and look where that's gotten us. So to a certain extent AI has already influenced the quality of our discourse and the nature of the society we live in, and arguably has been responsible for some pretty negative societal changes. We ought to be worried about this a whole lot more than we are, and we ought to be worrying about it not necessarily in the context of what happens when we create a machine that's sentient or conscious or able to prevent us from unplugging it, but rather: what about these artificially intelligent algorithms that are sneaking their way into our lives in a way that appears innocuous, but that may have goals not purely aligned with our goals as a species, even if they're aligned with Facebook's goals as a company in terms of maximizing profits?
Brian Truglio: We don't need the full Westworld experience to see that machine learning has already had such a devastating effect. The level of alarm seems kind of low; there doesn't seem to be an appropriate concern for what's happened here. Do you think people should be more concerned about this, be talking about it more?
Lars Perkins: I think we definitely should be talking more. Again, exactly where I sit on the spectrum of how likely it is, I'm not smart enough, I don't have the experience, to say. But we need to be talking about it a lot more than we are. The whole Facebook fake news uproar brings that into the public consciousness at some level, which is good, and having high-profile, essentially celebrity figures like Elon talking about the existential threat this may represent is also a good thing. But it's got to work its way into curricula. When we're training engineers, we need to be training them in ethics and alignment, integrating fields that were previously thought to be pretty far apart but are now merging ever closer together. So yeah, we should be spending a whole lot more time worrying about this than we are.
Fedor Kossakovski: The way you described intelligence at the beginning of our conversation, you said there are scales of it, a growing list of behaviors you can perform in response to stimulus. I think people are so self-centered that they believe humans are unique in having intelligence, but other animals are intelligent too, in different ways, and in certain ways machines are as well. Artificial intelligence at some level is already here, and it's just going to keep growing and growing, and we need to be having those conversations now, not when that quote-unquote conscious machine suddenly arrives. That's not the time to be having this discussion. It's now, before that situation arises.
Lars Perkins: I completely agree. We should be paying more attention to this. It should be integrated into the way we have discussions about technology, much earlier in product development, and it ought to be integrated into curricula, so we don't say, oh, ethics and philosophy is over there in that building and computer science is over here. The two are merging at a rapid clip, and finding people who can straddle the fence is difficult.
Brian Truglio: Looking into the future 5, 10 years from now what do you think is the most exciting or promising AI business that might be a decade away?
Lars Perkins: In my opinion, the industry with the largest potential to be transformative, in a way I believe could be universally beneficial, is health care. So much of what we do in the way we practice medicine now, we'll look back on it, maybe not in ten years, but in 50 or 100 years, and think we were basically the barbers. We're doing bloodletting now.
Brian Truglio: Leeches, yeah.
Lars Perkins: We give people medication and it doesn't work, and you say, ah, well, sometimes that one doesn't work out, try this one. There's got to be some underlying reason; we just haven't had the ability to understand it. When you can get into treatment plans and medication plans that are a function of, or influenced by, your genetic makeup, or whatever other characteristics we can discover from looking at the data, I think we have the chance to transform the way we deliver health care, hopefully lowering costs and improving outcomes for everyone.
Fedor Kossakovski: Lars, we really appreciate your time to give us your nonexpert expert opinion.
Lars Perkins: My pleasure. Thanks a lot.
Brian Truglio: Thanks, Lars.
Fedor Kossakovski: That was good. That hashed out a few things for me.
Brian Truglio: Don’t forget to rate, comment, and subscribe to our newsletter at Miles To Go. Thank you all for listening. And as always please let us know if there’s any topics that you would like us to cover. And again thanks to Steven Gammon whoever you may be, chat bot or human, for suggesting the topics for these two episodes.
Fedor Kossakovski: You can shoot us suggestions there. I'm @SciFedor.
Brian Truglio: And @btruglio.
Fedor Kossakovski: Yeah and that’s Twitter. You can find us on Facebook and all that kind of stuff on our website milesobrien.com.
Brian Truglio: Thanks for listening and we’ll see you next time.
Fedor Kossakovski: See ya.
Banner image credit: Andy Kelly | Unsplash.