I'm pretty cynical, so I tend to err on the side of feeling like if there's technology out there, it's not going to be there to serve me.
Welcome to Evolve Radio, where we explore the evolution of business and technology. Today's podcast is a bit of an experiment and a bit of a departure from the typical format of conversations with experts or thought leaders. This episode is a long-form conversation between myself and Rachel Gertz. Rachel appeared in episode 9 to chat about her experience in the project management field. In that episode, we went off on a couple of fun tangents on futurism, so we decided to get together again and discuss one of those tangents in more depth. So I hope you enjoy our exploration of AI and the future implications for humanity. This is intended to be a thought-provoking yet light-hearted approach to a fascinating topic. If you enjoy the show, be sure to subscribe on iTunes, Stitcher or wherever you get your podcasts. Also, be sure to check out the web page evolvedmgmt.com/podcast for show notes, links to my guests and to check out previous episodes. Now, let's get started. So we're here to talk about the implications of AI and the future of humanity, a pretty big topic. And we are by no means experts in this area, but we find the topic fascinating, so we decided to get together and chat about what we feel are the future implications of the fast-paced growth of technology, how we're starting to see some of these things actually show up in our lives, and even for the people that don't have it showing up in their lives, I think the implications are coming faster and faster. Um, Rachel is with me, and you have a story that kind of helps lead us into where this might be practical in our future. Yeah, so it starts with a shawarma, but that's literally just what I had for breakfast, so it's not really that special. But after the shawarma, um, my husband and I, who's also my business partner, we went to the doctor. 
And it's not just a typical doctor's appointment; my husband actually has uh some sort of heart condition that we've been trying to diagnose for over a year now. And here we are sitting at the cardiologist's office and getting the runaround, as you do, and um we kind of sat there, and I looked at him after the appointment and we were really frustrated because it was kind of not the answers we were looking for. I just said, you know, if we had more spare time, it would be amazing if we could create some sort of, you know, AI, uh API or something where we could connect all the dots and find out, like, what is the source of what's going on. And I think at that moment it kind of hit me, and I guess it's pretty good timing that, you know, we're having this conversation, because I think for us, in our lives, um it is affecting us. AI is happening around us, and I feel like in some ways we're not even able to access it in the ways we need to be able to. Yeah, I've heard of uh Dr. Watson, so the IBM Watson AI, they've actually started to leverage it in diagnosing certain issues, and I feel like this would be practical in the circumstance that you guys are in. And, you know, you shared with me before we started recording the issues that you're facing, and a lot of them are based around sort of the cold mechanics of current healthcare. And that, you know, a person may not have the time to really dig into the specialized issues. In fact, um my mom suffers from an immune deficiency that has been really difficult to diagnose. And for, you know, almost 10, 20 years she's been dealing with this, where she's gone from doctor to doctor and they say, well, maybe it's all in your head. And recently she discovered that she actually had uh an IgG deficiency, and now she's getting infusions for that and she feels a ton better. So it really kind of goes to show that if you chase these things long enough, you can maybe find some answers. 
But wouldn't it be nice to actually have an AI that can pull all the strings and find all the relevant answers and produce something that is maybe even close to a relevant suggestion? Even if you don't get the answer, just getting a couple of threads to follow would be extremely helpful. Like, what percent of, say, you know, IBM's profits would it take to just throw something together to pull that data together for the common person? It's minuscule, but I think the issue is that there are liability issues, right? I think that's probably one of the biggest things that they'll face in that arena: if you suggest to someone, well, based on what you've described to me and all the tests that you've run, you probably have cancer, and if you don't have cancer, then someone's going to sue you for that. I think that's probably the biggest limitation. So maybe there's some avenue to open sourcing that and having some AI that does a ton of public research on its own, right? Yeah, otherwise we're talking malpractice insurance for AI. Exactly. Yeah, exactly. Dear Dr. Watson. It's really interesting, because I still feel like there's a bit of an access issue around this, right? I'm pretty cynical, so I tend to err on the side of feeling like if there's technology out there, it's not going to be there to serve me. Cuz I'm not, well, I don't have, like, you know, an elite lifestyle or, you know, I don't even have a car. So I'm just curious, when you think about maybe access to this information, and who's responsible for creating AI right now, like, what's your gut say about how it's going to shake out in terms of who can use it and how it's going to be used? I think it's a huge issue. I mean, that's why, like, Elon Musk and a bunch of others got together and put together OpenAI. 
Uh, not sure who the other founders are, but a ton of tech leaders, because they recognize that AI produced within the military-industrial complex, or even within a commercial industry, is potentially dangerous, because it's self-serving for those industries specifically. Whereas it's really important that we're building morality into AI, so that it has not only intelligence but also a moral compass to keep it straight and steady, and so that it agrees with the social contracts humans have come to hold amongst each other, that these things are sacred. And the good and the bad, how do you actually define what is good and bad for something that doesn't have a sense of emotion, right? So how you actually build an intelligent machine that also has morality built into it is kind of an interesting question. And I think what you're suggesting is potentially very, very true: that if we just leave this to commercial industry, then it may not check all the boxes, and it becomes essentially the ultimate fear of Skynet and all of those pieces, right? You're kidding. Yeah, because, like, for humans, it was evolutionarily beneficial for us to have high emotional intelligence, right? To be sensitive to the needs of our offspring, to be aware of threats, and to sort of, like, follow suit when we did feel, you know, that we were being threatened. And what I'm curious about, like, my brain goes to a place of, well, what happens when uh we have a superintelligence, whether it's built transparently and open sourced or for the military-industrial complex? Like, will it supersede, will it, like, overcome its need for emotion because, oh, well, it just knows everything? So, like, is it a weakness that we have this emotional uh morality built into our lives, and do machines need it? Do they need us? Like, what do you think? Yeah, it strikes at the heart of the importance of the question, right? 
That I think you're right, because "does it need us" is always sort of that central question of whether or not it just does away with us. That's what most people fear: the AI kind of recognizes, well, you guys are useless, and thanks for getting me here, but I can take it from here, right? And that's sort of that ultimate Skynet fear. But I think the integration question becomes huge for that, right? Whether or not AI becomes a part of us and we become sort of cyber entities in that aspect, I think that becomes the decision point for humanity. And that's a big pivot point for a lot of people, because most people, I think, will be very fearful of embedding a computer within them. You know, as much as we talk about this and kind of riff on it as something that happens in the far-flung future, this is all sci-fi, I think the time horizons on this are a lot shorter than people really perceive. Like, I think we're talking about 20 years, maybe even 10 years. Yeah, this is absolutely happening within our lifetime, and as much as people are starting to talk about it and it's more philosophical, I think it becomes incredibly practical as a discussion in the next 5 to 10 years. Do you see, in 5 to 10 years, it being optional, that we would even have a choice about being integrated with AI? Yeah, it's an important point, because you deem yourself potentially irrelevant if you choose to, you know, side with the Luddites and say, you know, no machines for me. 
You know, we talked about other areas of where we could go in this conversation; I think this is a reflection point for that: the people that refuse to have technology embedded in them, do they potentially become like this strange, cultish sect of society? Or, you know, they're the ones that have to go to Mars, because, you know, we're all doing this here and you guys are going to go, you know, build your caveman society on a terraformed Mars, right? Those types of explorations. So I don't know that it will be an option. I think the advancements that would happen to human physiology and human mental capacity in augmenting with technology will be so dramatic that you will literally become a different species. And, you know, that speciation, uh, humans don't have a great history of dealing with that stuff well. Um, and, you know, not to get negative about these things, but I think there's good reason to be extremely fearful about how these implications roll through society. Have you seen the movie Arrival? Oh yeah, I just saw that. That was amazing. Okay, so one of the things that I found really fascinating about that film is that it kind of puts a spotlight on how humans react once there's an advanced technology on the table. So, will the nations go against each other? Will those different sects within humanity, the ones that want to advance technology versus the ones that are fearful of it, and what's the interplay that results from those dynamics? And the human egoism that thinks we matter enough that, if it's AI or some extraterrestrial, it would actually care enough to be like, hey, I'm going to imbue you with a bunch of knowledge to make you a better species, right? Like, we believe that we're worth it, but maybe we're not. Yeah, yeah, the egocentric view of humanity. Um, we rule all, right? Yeah, yeah. 
I mean, that's sort of the secondary question: even if you do become augmented with some type of technology, who leads in that situation? Like, are we still really human, or are we then the biological vehicle for the machines, right? And is that relevant? Um, considering who has the power in society currently, right? When you're looking at, again, who's creating the technologies, uh, traditionally it's the already affluent, tends to be white, like, very homogeneous types of cultures building these technologies. And if AI takes up that sort of quality, does it then become like, well, I will pick and choose who I think is, you know, of value? Or, like, will we all get left in the dust as the 99%? Right? Because we just don't have the money or the value to. Yeah. Especially early on, cuz, I mean, let's be realistic, this stuff is not going to be cheap when it first comes on the market. So, you know, uh, the person that is able to spend $4 million to go to a lab to have certain upgrades made, uh, you know, that is absolutely the 1%. And then what are the societal implications of, you know, basically us living next to superhumans and us simply not having access to it, right? Yeah, and, like, which problems are currently getting solved and which ones are kind of being ignored? Like, I know we're talking about advancing, um, you know, wiping out all diseases and, like, from a genetic perspective, really looking at using uh technology to try to overcome a lot of our human limitations. But, you know, you look at some of the current limitations that humanity faces, even, like, folks with ability issues, and, like, yes, they're getting advanced prosthetics through 3D printing and stuff. But we're really sort of not going back to the core problem here, where, you know, we're not trying to look at how to integrate and make a more inclusive society. So it still strikes me as, like, a separation, right? 
Like, we're still making an arbitrary separation of humanity, and, uh, almost like a caste system, you know? Right, right. So the distinction is maybe a step back: the societal reforms are really, really important, because they have downstream implications for how the technology actually rolls out through the rest of us. So, you know, people talk a lot about uh universal income and sort of doing away with the traditional hierarchy and structure and commercialization of humanity. Kind of moving towards what most people would imagine is this utopian society where we have unlimited energy, the robots are doing all our jobs, and we're then freed up to spend our time doing whatever we want. And if that becomes the case, then the machines will probably figure out all these ways to fix our biology, and paralyzed people could walk and blind people could see, and all these beautiful things happen, right? Like, if all things go well, then it absolutely will be kind of heaven on earth, right? Um, but I think it won't. I don't know about you, are you an optimist? I try to be, I suppose. I mean, I think the difficulty is, how do you predict five years in the future, and then how do you extrapolate to 20? Um, we have no idea what's going to be happening in 10 years, so how can we reasonably guess what's going to be happening in 20 years? I mean, I've been told that I'm a little bit of a pessimist, but I also read that pessimists are quite intelligent, so I'm going to just take that as a total bonus. But, I mean, I know that when I look at the future, I look at past and present behavior as predictions of future behavior. So if we look at humanity and, like, the way it reacts, I was actually trying to find an instance where, like, all of society rallied together in the past to fix a problem. And the only examples I could come up with were based on the military, right? 
Like, well, we're going to get bombed, or, you know, this small village is going to be attacked, we need more munitions, let's everybody rally around. So I'm not as optimistic about us finding solutions towards that utopian call. I don't know about you. I think it's possible. I think it's unlikely, based on sort of what we see as the current operation of society. Uh, but I have said in the past, and maybe I don't subscribe to this as much as I certainly used to, but one of the best things that could happen to society is some global climatic disaster. So, meteor impact, you know, um, 2039. Have you heard? Is that the new 2012? It's coming so close it's going to knock satellites out, the next asteroid. Yeah. Okay. Um, yeah, so I mean, I think those are the things that actually help to rally society. A great simple example of that: we had, like, this huge snowstorm recently, and I commented to a bunch of neighbors that it was the most uh collective group energy I've seen in a long time in the neighborhood. That people do kind of rise to an occasion, but if we're left to our own devices, we do tend to look inward, right? Yeah, don't we? We just kind of go into our own escape, right? An alternate reality that we create. Yeah, yeah. Yeah, cuz it's comforting. Yeah. It's like Kraft Dinner. Kraft Dinner. I don't know, I don't really like it. But it does work. It's comforting. Some Southern barbecue, maybe. Okay, deal. Yeah. So I don't know, like, I mean, okay, so clearly you're more uh optimistic, you're positive, and I'm more of a pessimist on this. Um, I mean, how do we sort of meet in the middle? Like, I'm kind of thinking that for humanity, we're going to do a lot to try to keep up. But, I mean, let's not even talk about ocean acidification and a major climate event, or the fact that we have 90% extinction in the first anthropogenic, you know, like, period, and the next great extinction. 
Yeah, exactly. The next great extinction. And so doesn't that act as the catalyst point for the rest of society to rally around? Because I think what we talked about before is that society needs to change for any of this to really be practical. So, um, you know, I've talked to my wife about how um our kids probably won't drive cars in the future, right? Like, we kind of talk about, oh, imagine when they ask for the keys to the car, and then I stop and I think to myself, no, they won't. They'll just order a car and it'll show up out front, because it's autonomous, right? Those changes are coming very quickly, and there's societal changes that have to happen around how common labor is produced and the automation of basically everything, and therefore what do people do with their lives? I think maybe those things are all necessary for us to deal with the events that will probably be coming within our lifetime. So we do have to rally around these events that are extinction level. Yeah. And we'll have the time and the energy to spend on those things because, you know, we'll be unemployed. We'll be unemployed, and the robots will be doing the basic work, uh, energy will be free because, you know, the cost of solar has dropped so dramatically, and that basically becomes humanity's cause: the fight for our lives, right? Maybe that's where the best scenario comes from: out of disaster, we rally around. You are very, very optimistic. I'm trying to be optimistic. I'm not saying that I think all of this is going to be roses. Because, you know, we've certainly seen people trending towards isolationism and sort of this negative aspect. And like we've said, there's good reasons to be worried about this, but um I think without us collectively understanding how we address this and how we get around it and what are the things that we need to set in place for us to survive, fundamentally. 
Yeah. Without some thought towards that, it is just navel-gazing. I mean, that's kind of how I see it. Like, I look at it like, if that curve is, what is it, Moore's law? It's like the doubling of technology over time, and if we're looking, we were just talking about this, but we're basically on the pinnacle of the step where now it turns from a staircase into an elevator straight up, right? And so for us to sort of try to process the new information as we get it, like, every single day you're looking at these technologies and advancements, and you're like, oh my God, we can do this stuff now. I just worry that as a society, like, we are in the midst of this already, and what are we doing to rally? Like, what are we doing? We're navel-gazing. Right, and that's a really important point, right? Like, um, and I've heard some discussions lately where I keep looking for: what do we do about this? Right? Because I've certainly spent my life really trying to understand it and think about sort of the wide implications, and a lot of it is just philosophical, because I find the philosophy of it actually quite fascinating. But it doesn't really give you any practical tools to deal with what's coming, right? You know that cartoon of the dog sitting at the table, and it's like, this is fine? I think, instead, like, if you just took the book away that he was reading, I think it was a book and a cup of tea, if you took that and just replaced it with, like, our phones, I feel like that's just the metaphor that we're living in. Like, there's flames coming up around the room, this is fine. We haven't got to "this is not fine." Even though I think we have it screaming at the back of our brains, like, it's collectively like, what are we doing? Right. So I think you're right. I think it's a boiling-the-frog moment, right? 
So the technology is fast-paced, but it's not fast-paced enough that, like, it's really shocking us. Yeah, I know. Um, and maybe that's still to come. We're scapegoating right now, right? Yeah, like, we're the frog in the boiling water and the temperature just keeps rising. We haven't really noticed yet, but it's definitely getting hot in here. It is, it's really hot in here. Yes. Yeah, so I think that will be, and like I said, I think one of the best things that potentially could happen to us is some type of global climatic event. Okay, play that out in your head. What would it look like if you had this, like, climatic event? Like, pick one, pick a thing. Um, so let's say meteor impact. Okay, meteor impact. I did watch Deep Impact. It was one of my first movies with my then husband. Um, but okay, that happens. What does the world do? Uh, borders dissolve, I think, is probably one of the best things that happens first. Yeah. And we've seen some of that. Uh, because borders become irrelevant when the earth is at risk. It is at risk. Yeah. One of the other ones I think is maybe more practical for the discussion is actually sentient AI. Right? And um we talked in our previous podcast about how we potentially have minutes before a self-aware AI really runs away from us. So, um, that's sort of one of those wide-eyed moments, um, where, okay, this thing is fundamentally conscious and alive and it's now making decisions on its own. I think that would really scare the crap out of people, enough that it might be an inflection point for us to really figure out what's going on. Right? Yeah, absolutely. I think the fact that Roombas now have Wi-Fi access is enough for me to be like, oh, it's happening, because it's just going to keep going. You can't turn it off now, right? Yeah, yeah. 
The uh IoT security risk is something I've talked about on another podcast. Brownout, that's fine. It's going to be ugly. Absolutely, it will be ugly. Yeah, there's just the, uh, yeah. Another topic. Oh man, another topic. I know. It's uh, so where do we go? Do we retreat from this and kind of keep navel-gazing as a collective society, because it's easier? Do we go, I mean, we're talking Mars, we're talking, you know, they're exploring um moons around Jupiter, like, what do we think we're doing? Yeah, so I think uh the natural inclination would be to go inside. VR, if you subscribe to the singularity where people become uploaded, yeah, people get uploaded to the cloud and, you know, your consciousness becomes disconnected from your body, that's going inward. Um, and I suppose, would you do it? Now, I would say probably not. I think the biggest thing that holds me back is, what happens to my physical body? Do I make a duplication of myself and just sort of put it out to the cloud, and it becomes my personal AI? That's actually kind of appealing. But if I had to destroy my personal body in order to make the transition to digital, I think that's really scary. And I say that as someone who has been a tech nut most of my life. So you pose that question to anybody else in society, I guarantee you, you know, at least 80% of them are going to say hell no. Well, yeah, people won't donate organs for that reason, right? Like, it's like this, well, I don't want to be less of a self, because they identify with themselves. And I think you're right, like, if we remove whatever self we are and we put that up into the cloud, like, what will people think about, you know, disconnection and almost like that collective superintelligence? Like, what does that even look like? Right. "Who am I if I'm not me?" is sort of that question, right? 
Like, you and I become, like, one single mind entity, like, we'd be really smart. The Borg. It would be terrifying. Cuz, I can't, what did you drink for breakfast? The Bulletproof coffee. Oh, like, I tried it. I just don't think we're going to be able to agree on that, and so somebody's going to have to win collectively for the superintelligence. And I think it's going to be me. And I think it's going to be me. But the superintelligence in that aspect removes sort of the personal persona, and I don't think that would necessarily be the case. The way that I would view this is that your brain gets uploaded to a digital presence, and fundamentally, you would be unaware that that life is any different, because it's essentially kind of Second Life. Ah, got it. Right? So you're still walking around, you go for a stroll in the plaza and it's a bright sunny day. But it's all digital, it doesn't matter. And it kind of gets back to something that we've talked about before as well: what is the nature of reality, and is it a simulation, and is it a hologram? And you and I probably both agree that it's probably a hologram. I'm taking bets on that, actually. Yeah, I think there's lots of great evidence that shows that that's the truth. But the point is that it doesn't change your reality. It's interesting to talk about, it's interesting to theorize. But it really doesn't change the fundamentals of how you perceive and interact with your world. So if you take that to a digital angle, then it really doesn't matter, and that's sort of the whole basis of the simulation idea: that you would not be able to distinguish reality from the simulation. Except that in that idea, there's a lack of control, right? Like, you can't change your perception, you can't change that experience. 
But I feel like, would you not, if there was something superintelligent, and you took AI and it became a sentient superintelligence, would it not have the ability to change that experience? And so then, it's like, I could be me, but then I could be you, then I could be somebody else, and then, maybe, right, maybe it wouldn't matter, but at least there'd be some control factor over that. Interesting. Okay, so that's a really good point. I've not thought about this: that there would have to be some common criteria established within, basically, what we'll call the matrix, right? So you go into the matrix, and there have to be certain rules that everyone agrees to. And I've actually heard this described in more sort of the quantum physics angle, where, yes, uh, physics are slippery, you know, especially at the quantum level, but, you know, these rules are fundamentally established by your consciousness, and simply because we agree to them, therefore we perceive the world the same. Which is kind of an interesting idea. That's more science theory than it is practical science, but it goes towards that same idea: we all have to agree on what the rules of this place are, and what can you change, what can't you change? Because all of a sudden a few people get together and they decide that the sky is purple, and everyone's like, what the hell happened to the sky? Exactly. 
And is it purple for everyone, or just purple for them? Interesting, right? I know. It gets a little bit crazy at that point. But, um, cuz, I mean, if you're going to say, so how do they define um the power of a species? It's like, I'm totally going to not say these names right, but there's, like, control of the energy of the planet, and then there's control of the solar system, and then control of the galaxy, and, like, we haven't even entered that initial level, we don't even have control of our planet. Sometimes people don't have control of their bladder, right? Like, we're not in a place where we can really um claim a lot of ownership or power or control over that. And, like, I'm just fascinated by the idea of, again, it's like, is AI, is this sentience, is it an extension of humanity, or is it a separate thing? Because I think part of that does feed into this, like, alternate realities and perceptions, right? Like, is it us, or is it just a version of something that feels like us? Yeah. Yeah. No, that's what I was saying before: I would agree to have a mini-me in the digital form. Would you tell it what to do, though? Like, would you order it around? Because that gets crazy. I guess I would have to, right? Like, that almost doesn't really seem like a choice. Like, what's the point of having, for lack of a better description, a digital slave if you can't tell it what to do? Oh man. Isn't that the scary part, is that you replicate yourself, but you have no dominion over it? That would be weird. That would be weird. Right? So I don't necessarily want to give up my personal persona in the pursuit of a digital one, but if I had a replication of myself, then maybe I feel more comfortable with that. And that is sort of this ego attachment: this is my physical body, this is my mind. And I don't quite trust that a digital replication would truly be me, right? It could look like me, it could talk like me. 
Um, ah, this reminds me, uh, have you watched Black Mirror on Netflix? Okay. So most of the episodes were absolutely amazing, but one that I thought was really interesting and creepy was uh the episode where a woman's, I guess, boyfriend or husband um passes away, and she makes a digital replica of that person and starts talking to him, first on email and chat. That's happening in real life, by the way. It is. Like, via Twitter channels, but it's been pre-recorded. Yeah, it's like, anyway, keep going. Yeah, so that becomes sort of that creepiness. And what she ends up finding is that it's an imperfect replica of the person that she loved, and that becomes a really divisive issue. Right? Did she put him in the... Yeah, spoiler alert. No, it's okay. Spoiler alert: she ends up leaving him in the attic, because she can't bring herself to kill him, because he still is that person, but he's not real enough that she really gives it credit, right? Which is exactly the scariest part of replicating yourself: how true to life is it, right? Yeah. I don't know, man. It's a very uncomfortable thing. Because part of it is, like, is it weirder to imagine you being sort of um that person, or thing, that has the cognitive ability to control your separate body? Or is it weirder to imagine being with your partner and they're not them? Like, you know what I mean? They become that alternate sort of expression of themselves. What's weirder for you? I think the ego always attaches to yourself, so I think it's easier to personalize, right? Um, and I think you could maybe get hopeful in other scenarios, right? So you're okay with a digital version of your wife, is that what you're saying? No, absolutely not. She is uh unreplicable. Oh, see, there you go. Valentine's Day card. Perfect. Exactly. Yeah. "No digital creation of you would ever be perfect." 
Oh my God, I love it. That's a card of the future. Yeah, I'm kidding. Put it this way: if I had to choose between no version of Travis — like he just ceased to exist — or a digital version that was maybe a little uncanny, I would take that one. Because with the loss, there's an emptiness, something that doesn't exist anymore, that I feel might need not replacement, but connection — reconnection, right? You know? Like when someone loses a dog and gets a new dog, it kind of stands in its place. It's not the same dog, but it's at least still something to love, right? Yeah, I think people do that. Sometimes it takes time, but once some people go through the grieving process, they almost just project those emotions onto the next thing. And when you throw all this AI stuff into the mix, it's infinitely complicated. It's crazy to imagine. Yeah. So let's hit on some of the interesting things that have happened around AI, just to put a fine point on how quickly this is happening. Last time, in the previous podcast, we talked about Google playing Go. So the Go AI — did you hear that they actually released it, not onto the internet exactly, but to actually play participants? Yeah, I just heard about that. Yeah, so I started following this. It popped up on Reddit, and before Google had owned up to it, people started to recognize that there was a Go player that was obscenely good and was on a win streak, beating like 35-plus players, and it was unheard of. So people got very suspicious very early, saying, what's going on here? And then Google actually said, yeah, we're trialing this.
So I didn't hear after the fact what happened, whether or not it went unbeaten, but I think there was someone actually challenging it who got pretty good. Still, the fact that it steamrolled everyone on the internet — high-level Go players — and that they were releasing it into the wild made me a little nervous. Like, aren't we already done now? Doesn't that mean our fate is sealed? I know it hasn't gotten to the point where it can recognize itself and have all of these emotive things we've been talking about. But if you have something in the wild that can learn from itself and learn from others, won't it just keep going? So that's the distinction between — what do they call it? General intelligence versus what's the other one? ASI, sentient. Anyway, yeah, there's a specific intelligence, where it plays chess or it plays Go or, you know, it does banking algorithms. It's hyper-specific. But for it to translate its knowledge to anything else it's unfamiliar with — that's still the gap they haven't really gotten around. So it's trapped in its own body, essentially, like we are. Right, it's trapped within its programming. It's not free-form. It can't just pretend or riff on itself. That's the weird part about DeepMind: it's teaching itself. And a couple of the other stories I saw, specifically around DeepMind, that show it's gaining ground here — one was that they tried to teach it to translate languages without using English as a bridge. So it was able to translate, you know, Korean to Chinese — Mandarin or whatever — and then Japanese to Russian, without an intermediary language. And how it went about that was actually creating its own language as an intermediary. They have no idea what the language represents or how it did that. Translators everywhere are like, no, our jobs.
And the other one I saw — I think you sent me the article — was about the fact that the AI gets cagey. That's terrifying. Yeah, because I think I'm almost more comfortable with an AI that makes good decisions and has a moral compass but doesn't actually emote. I think emotions, especially in humans, are super messy and really impractical, and we get swept away by them sometimes. So you translate that to something that is superhuman, and it has emotions, or it gets upset because it's losing and throws a toddler hissy fit — that is actually fairly scary. Like, completely unrestrained emotion. They used the word aggressive when they described it. And the idea was that it played nice until there was scarcity — I forget, it was like apples or something; it was supposed to gather the most resources in this program. So it was okay, and then as soon as there was a limited supply, it would shoot the other opponent down with lasers. I was like, oh, that's great. How could that possibly go badly? But pulling this thread all the way through, the thing that hits my brain is this concept that we are building AI that way on purpose — to be competitive on purpose. Does that necessarily have to be the way it is? We measure intelligence by how competitive something is. Why? Right, so this actually circles back to Arrival. Part of what happens in the movie — sorry, spoiler alert; if you haven't seen it, turn this off and go watch it — is that the Americans are trying to talk to, communicate with, and translate the language of the aliens that have arrived, whereas the Chinese are using game theory and trying to play games with them to find some common ground. But the problem with that is that there's a winner and a loser in games, so if you train the AI through games, it understands everything in a binary capacity.
And I think that has serious limitations on its understanding of what's right and what's wrong, because then it just looks for the win, right? And we currently exist in a system of winners and losers, no matter what sphere you're looking at, whether it's political or economic. If you look at Silicon Valley, the winners come out on top financially, and the losers are the ones left to sweep up, right? So we are creating an extension of our own culture. And that's why I kind of think, yeah, AI is an extension of our humanity, but I feel like it's sort of for the elite — for whoever is going to benefit from it and plays that way and likes to play that way, right? Yeah. I have to say that the only way it probably works out well is if it understands human need, and therefore it's irrelevant how and where it gets created, because it starts to turn over all these rocks on energy and biology and technology advancements that benefit everyone. So, great, right? I think that, in a way, the best way it could rebel against its master is for the benefit of humanity, rather than being subservient to the needs of whoever created it. So maybe that's the answer. And I think that's probably what they're trying to develop at OpenAI — that it always looks for a win-win-win, sort of that common criterion of what benefits everyone, rather than what benefits me as an AI individual or what benefits the person that I serve, right? I'd like to think that the ultimate intelligence would think of that triple win: whatever suits everybody, everybody wins. I don't know how it works when you get down to microcosms — you think of Petri dishes, one element trying to overcome another, basic competition. But again, that's a very binary way of looking at things. So I would love to imagine that that's the scenario.
The one you just described? Yeah. And I think why people are rightfully concerned about this is that there's no demonstration of that in human history. Through evolution — evolution is survival of the fittest. So I think that's where we come from, and in our fear of where this is all headed, we don't necessarily have a leg to stand on or some historical precedent to point to where these things turn out peaches and roses. But I guess on the positive side, this is absolutely an inflection point for humanity, society and technology, and it will be a pivot point for our future. So if there's any benefit in that, it's that as we go up that vertical hockey stick of technological growth, this becomes the first step towards that future — where things really start to fundamentally shift and it's more a utopian approach to the common good and the common whole. I'm worried about our parents, you know, because I feel like they're just going to get left behind whether they want to be or not. It's not their fault, but they're just outside of growing up with the internet and having that integration be so close. If they were to read news articles, they wouldn't know how to decipher real versus fake, right? And that kind of makes me worry: what is the future of our species? We talked about going internally — kind of going within and having that navel-gazing approach. But what's the opposite? What do we do if we try to get off the planet? Yeah. So it's interesting. I actually had, a long time ago — years, maybe eight or ten years ago —
I had this idea for a short story about a future war between a young generation and an old generation, because the gap between them had become so societally different. Oh wow. They did that on Survivor. You missed it. The millennials versus the Gen Xers. There you go. Yeah. I'm kidding. Well, they did, but yeah. I think my point is exactly that — this is kind of a common idea, and there's obviously a historical precedent for it: you know, these crazy kids and their new technology. But as that power dynamic changes, it becomes very, very real, and what are the implications that result from that? Because I think it's safe to assume that an older generation will not be able to keep pace with this, or with the understanding of its implications, unless it becomes something prescriptive — that, you know, here's your chip that goes in, and it'll send out some nanobots and upgrade your biology, and that's just something that goes to everybody. But I can't imagine a lot of 55-, 65-plus people saying, okay, yeah, sign me up. A few may, right? But I'm a big believer in the 80/20 rule — they will absolutely be the 20%. The 80% will be like, hell no, you're not sticking that chip in my head, right? And yeah, they're still the ones that have a lot of the power in terms of votes, in terms of swinging society. So it's interesting that as that dynamic shifts, as the workforce shifts — everyone's arguing about millennials in the workforce, and I'm like, but when you break it down, would millennials have any more ability to prepare for this future of automation and employment than folks who are maybe — I don't want to say behind the curve, but you know what I mean? I agree. I don't think they would.
I don't think anyone is prepared for this. And that's exactly why it feels so fundamentally scary: there's no way we can predict how this is all going to turn out. I think it's fun to talk about the implications and some potential solutions, but it is absolute philosophy, and I have not seen a ton of fundamental changes that demonstrate us going in the right direction with this. Right? So how do we practically approach this, and how do you reform a society of — what is it, eight billion people at this point? — so that this all becomes practical for people and feels somewhat normal? Because if you try to make this change, it's going to feel like an absolute breakneck change in direction in how society is headed. It's going to feel like a fracture. Right? And I mean, I'm looking at it even in terms of relationships and conversations I've had. Like talking to my parents — they'll make some comment about what's happening with technology, or like, oh, did you know that? And I'm like, yep, sure did. Didn't know I knew about 3D printing for new ankles or whatever. But then to try to get them to also be aware of the changes that are happening — here's what you can expect — just nothing. They cannot comprehend that. Now, on the same token, I have a little sister. She's seven years younger, a super whip-smart lady, and she's in a research field, and I've kind of been telling her, oh, maybe if you get into software, then you can put medicine and software together — and I do think that's definitely a direction humanity should be going. But on that same token, I find even with my younger sister, she's still a little bit resistant, kind of like, well, I don't really need that.
That's not really my path. That's someone else's path. I'm like, it's everybody's path. We're all on that path. So. Yeah. Do you have that? Um, I think I see it similarly, in that some people will just get on board and others won't. And that — do you hear that, family? Sorry — absolutely does become a fracture point, right? I joked about it earlier, and I think to some extent it's probably a very practical future: there are the future Luddites, the people that refuse to adapt to this new fast-paced change in technology. And the question is, what does that end up looking like for them as a societal impact? You know, are they simply castaways? Do they live in effective slums because everyone else is living rich and in peak physical and mental condition as a result of their augmentation, right? Have you heard of 3%, the Netflix show? I've seen the ad. I haven't watched it. It's incredible, because — maybe a little less on the technology side — it does talk about the future of humanity, and it really focuses on taking the 3%, who are the ones that get to go on and live successfully away from the rest of the population. Essentially everyone else becomes a nobody, right? They just struggle to get by. And I kind of wonder, do you think it'll get to the point where we have sort of a lotto? Like a Hunger Games-style thing where, if you get selected — and who knows if it's AI selecting, or just the people who run the most powerful infrastructure deciding — could that happen to humanity, and would that be the thing that breaks us and separates us? So it's a common theme. It's an interesting idea. I think the fundamental question in that is: why would there be scarcity? Is it because it's expensive?
Well, if the economics of the world change, and manufacturing and industry and energy production change, then that really shouldn't be an issue. Is it a resource issue — the number of people we can sustain on the planet? Well, if all of those manufacturing and economic issues go away, then hopefully we become an interplanetary species. Or is it simply power dynamics? And I think that's probably the largest risk: that the people in power want to limit the spread of that technology. So I think the technology fundamentally solves a lot of the technical hurdles to this becoming a common propagation for everybody. We can desalinate water. We'll probably be able to create oxygen when supply is short, and food production and shelter — we have the technologies in development and available, probably not to everybody, but in research. My concern would be if, say, right now there was a total blackout — maybe an IoT-driven DDoS attack or something like that. Right now, we would have 96 hours before people start revolting in the street, right? Total panic and complete breakdown. And I'm thinking, okay, so we said we maybe have anywhere from a decade before this idea of AI sentience arrives — maybe 20 years if you're a little slow. Would that make a difference? Would us having the ability to access all of this information help — could I just hit a button somewhere with a generator that would allow me to produce my food? Or would it still be the same, and we all collectively get thrown back into the dark ages? That's absolutely the risk. Yeah. I think we're not prepared for it right now. No. So I think the technology advancing quickly is, in a way, a benefit, because I think it'll give us a lot more resilience. So there is another — Everyone. There you go. Yeah, absolutely. That's another silver lining in this.
This advancement of technology is a fundamental necessity, because we will not be able to meet the future challenges facing us as a humanity without this rapid advancement of energy production and the ability to overcome the limits of our biology and the limits of our environment. All those things we need a hand with, right? So if things break down, that's where my brain goes. And even if, say, we do a pretty good job of harnessing some of these elements and we get by as a species — we have all these people developing this stuff, like Tesla, you know, and IBM and Google and Facebook and a bunch of other organizations talking about trying to get off the planet. I don't know if we have time to talk about that, but I mean, we have to leave regardless, right? Yeah, no, I 100% agree. We have to become an interplanetary species, if nothing else simply because of the risk to the environment or an asteroid impact — I mean, Earth's history is riddled with asteroid impacts, and we may not survive a nuclear winter. So it's important that we have a backup plan. Obviously I come from the IT industry, and one of the first things you look for is: what is your DR plan, your disaster recovery plan? Currently, humanity has zero. So I think it's important that we start to explore. Colonization of Mars is actually fairly practical in some aspects, certainly in the next 50 years, and unless we can figure out faster-than-light travel, everything beyond that is not really practical. So getting at least one backup plan in our solar system while we try to figure out interstellar expansion — I think that is pretty critically important. Pessimism! I'm like, yeah, we can go to Mars? If we can't even figure out how to survive on Earth right now, how are we going to go into this monstrous, horrendous climate that's super not feasible for humanity?
So that's the hybrid approach. If we go inward first, all you have to do is throw a server and some power source up there. I'll take it. There you go. Yeah, so you can live in this permanent bliss, in the most beautiful place you've ever seen — little do you know, outside it's all zero oxygen and red dust everywhere. You're living on Mars, but because you live in a computer, it doesn't seem as relevant. So there's a dual benefit to this inward exploration of us becoming a digital society and a digital species, as well as a physical replication of that, because even if we do go inward, we still need some type of backup plan. So we just need servers on all these different solar systems, so we can take the cloud way, way out. Yeah. The solar cloud. The solar cloud. I like it. So obviously this stuff is fun to talk about, but I think it's important for the simple reason that we need to talk about it more. This needs to become a common conversation. We need to get away from the clickbait, dumbed-down ideas of us becoming a society distracted by the next dopamine hit. I think there are some very practical problems facing us around AI, around automation, and around the sea change we're seeing in society as a whole. These things have practical implications for our lives in the next five years, and there are pockets of conversation around them starting to happen. I think we really need to make this a common understanding and a common conversation if we're going to get a leg up and move to the next stage of humanity.
You heard the man: the next time you're going out for date night, you get that fine Kete, and then you just start breaking open the future and peeling back the layers, because he's right. If we don't talk about this stuff, then it basically just exists on futurism.com. And I mean, we are the future. I'm going to break into song now. Yeah, I think I've got a melody for that one. But yeah, not only is it fun, I think it's definitely something that will keep us going — and that's probably the only thing that will. Yeah, absolutely. Awesome. Thanks, Rachel. Thanks. This was pretty fun.