Todd Kane: Today I'm joined by Ashley Cooper, COO at CyberDrain and VP of Community at Rewst. Ashley has spent over 15 years in the IT channel shaping MSP operations and customer experiences at companies like Auvik and Gradient MSP, while serving on the MSP Geek board and actively moderating communities, including the MSP subreddit. She's known for championing community-driven development on tools like CyberDrain's CIPP, and she's an active vibe coder. Welcome, Ashley.

Ashley: Thank you. Wow, that was such a good intro. I feel like I'm always stumbling on those parts.

Todd Kane: We connected through a mutual community focused on AI, and some of the work you're doing I found really fascinating, so I wanted to have you on to dive a little deeper. I guess to lead in, to give you a bit of bona fides on your activity in this space: do you want to tell us about the little gift Lovable sent you as a massive contributor of tokens on their platform?

Ashley: Yeah, they sent me this little Lovable light. It has the little heart logo, and it moves the same way it does on their site and stuff, so I thought that was really cool. I actually looked it up; those things are not cheap. It was made in partnership with some specific company. But this was something that surprised me a little bit, because at the end of the year, just like everybody does now, they did the Spotify-style wrapped. They had their little Lovable wrapped with a bunch of stats, and some of those stats blew me away. They said I was in the top 0.01% of users of their application, or something like that. I was sharing some of mine in their Discord community, actually, just thinking, oh yeah, a lot of people are like this, and even some of their own staff were in there going, wow, you almost have this guy beat.

Todd Kane: It's amazing.
Yeah. I mean, it's a loved platform, as it's aptly named, I suppose, so to be in that echelon of contributors and builders on the platform, I think that's pretty incredible. I'd love to dig into what you're building in Lovable, where you get inspiration for your projects, and what you're coding in general.

Ashley: Yeah. When I started using it, I was already trying to find solutions like this. I was building automations that fed into front ends. Ever since grade eight, when I built my first HTML page, which was a Backstreet Boys fan site, it's always been: I'm not a web developer, but I understand things, mainly because my ADHD doesn't let me stop trying to figure them out. The thing that drew me to Lovable originally was, for one, it had a freemium offering, so you could use up to five credits every day, and if your initial prompt was good enough, you could build stuff that way, still faster than if you hadn't. I originally bought their $20 subscription, then got really frustrated at how little I could use, and I canceled it. This was so early on that I think one of their founders reached out asking why I canceled. And I was mad. I was like, you make me buy the whole subscription upfront, and I don't want to do that; I want some usage, I want to build. But inevitably it was a good product, so I came back. The first thing I tried to build, which I would probably call a failure on everybody's part, and this brings us back to a whole different topic on automation maturity, was a full rebuild of one of my favorite tools, which had just stopped being available. It was called Orbit at the time.
It managed community software. It merged profiles together, and you could see running lists of things. I was like, I want to rebuild that. I got as far as setting up a database, Discord authentication, all these things, and then I realized, and we can get into all the ways I realized it, but at a surface level, that I'd caused more mess than anything, because I was using things I didn't understand and didn't know what to do with. At the time everybody was saying, oh, it's a prototyping tool. Where I found a niche was: around this time last year, I was on this hackathon thing with John Hardin and Jeffrey Newton and a few other people, and we were focused on trying to build something in 20 minutes, right? So I was like, I'm not going to go all out here. If I think about where the biggest gaps are, it's the fact that there's all this data out there, and all this knowledge in an AI corpus, that we just don't know how to harness or capture. One thing I always subscribed to was: I don't need to have AI involved in my outcomes, but I can still use AI to get to those deterministic outcomes. So I was practicing human-in-the-lead, really more so since then. I had this one specific problem that I typically find hard to solve but that a machine knows how to solve really well, which is converting JSON data into a CSV. So I literally made this little app, and it took me a one-shot prompt, because it knows how to read JSON really well. I just said, create me this thing where I can load the JSON in, choose which fields I want, and export that out. And that was what I presented.
I was really focused on not doing anything that anybody could look at and say, oh, here's the reason why vibe coding is bad. I wanted to do very specific, local-first, browser-as-OS type projects that prove this is something that, if I had the skill, I could have built without AI and it would have looked the same. And it has no real security implications, because it's bring-your-own and it runs in your own browser, at a high level, right? It's not connecting to databases where the RLS policies aren't set up properly and, oh, now everybody can prompt-inject your stuff. It's just very simple, deterministic stuff like that. So I built that, and because of how quickly I did it, somebody suggested, and I agreed: I could do this. I can build something that's ready to go, that uses AI intentionally, that solves one very specific thing, and I could do one a day for a month. Somebody said, bet. So I shared one simple use case a day for a month, and they were all local-first, browser-as-OS kind of stuff. Each one showcased a technology that is traditionally difficult to use, or has historically been available but hasn't really had the cognitive awareness around how to use it, and I'd use AI to help me learn how to do that. Back to your question about how I learn: I learn literally through it, because I believe text-based training is democratized now. The AI has it. If I want to learn, all I need to do is prompt it, know what I'm looking for, know where the problems are. Actually, I can't even remember whether I'm answering the question you asked me earlier or the first question. Do you want to bring me back to any specific points?
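The JSON-to-CSV utility Ashley describes can be sketched in a few lines; the function name, field-selection interface, and RFC 4180-style escaping here are illustrative assumptions, not the actual app's code:

```javascript
// Minimal sketch of a JSON-to-CSV converter: load an array of JSON
// objects, choose which fields to keep, and export CSV text.
function jsonToCsv(records, fields) {
  const escape = (value) => {
    const s = value === null || value === undefined ? "" : String(value);
    // Quote any cell containing a comma, quote, or newline; double
    // embedded quotes (RFC 4180 style).
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  const header = fields.map(escape).join(",");
  const rows = records.map((r) => fields.map((f) => escape(r[f])).join(","));
  return [header, ...rows].join("\n");
}

// Example: export only the fields you care about.
const tickets = [
  { id: 1, summary: "Printer down", notes: "3rd floor, again" },
  { id: 2, summary: 'User says "help"', notes: null },
];
console.log(jsonToCsv(tickets, ["id", "summary"]));
```

Picking the field list in a UI and downloading the string as a file is the only part left; the conversion itself is deterministic, which is exactly why it one-shots well.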
Todd Kane: We're jumping a little ahead, which is totally fine. This was about determining what to make, right?

Ashley: Yeah, yeah.

Todd Kane: Having those ideas: what do you dive into, and where do those ideas come from?

Ashley: Yeah. A lot of them come from lying in bed, or thinking out loud and being like, oh, I wish I had something that could do this.

Todd Kane: Yep.

Ashley: I'm a big proponent of "earn your automation," and I have been my whole career. Some of this comes from years of me doing things manually to build out a human process, or a human SOP, around how something is done. I just have these things that pop up now where I'll be like, oh, I know how to tell an AI to do this deterministically now. One example, one of the things I'm always playing with that is not really that helpful, is a note-taking app slash task-management app. Everybody's trying to solve that problem, but I use it as a way to figure out how I solve those problems cognitively as well. Other things I've built for fun, the ones that have gone the most viral, I guess, are the ones that do have AI involved in them. One of them was a spice checker. Jason Slagel asked me to make that, actually. He was like, I want to know what spice level my LinkedIn post is at. So you could put in the URL of a LinkedIn post and it would tell you which Spice Girl your post reads as, right? And one of the things I added was that you could scale it, so you're like, I want this to be more Scary Spice and less Baby Spice, and it would rewrite it in those terms.
My favorite thing that I've built, and it answers how I learn as well, has been my own AI resources tool. I've posted this as the AI Ash blog, but it's been: I'm going to build a glossary of terms, I'm going to build a learning process. Like, where did this come from? It didn't happen overnight. It's probabilistic, not deterministic, but what does that even mean? People keep talking about transformers. What are those things? Hardware? A technology? A terminology, a methodology? What did machine learning look like before the LLM was released? Stuff like that. So I've been building this teaching app from a pedagogical perspective. It has little glossary pages, and my favorite part, kind of playing on my own ADHD awareness, is that not everybody learns the same way. When you click on one of the terms in the glossary and expand the deep dive, there's a section called "Explain like I'm..." and it actually uses AI, for whatever you put in there, to explain that term to you in language that whoever you're "explaining like I'm" would understand. It's interesting, but it's also fun, because you can be like, explain like I'm a caveman, and it tries its best. But everything that I build, and I think this is the case for a lot of people, has been either selfish, solving something that takes me a lot of time or that I'm curious about, or something I hear people say is difficult, and I want to remove that complexity, because it's all ones and zeros. There's a little bit of "hold my beer" involved in some of those things, where somebody says something can't be done and I'm like, bet. Oh, you can't make a front-end-only chat app? Sure I can. WebRTC is a thing.
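The "Explain like I'm ___" deep-dive boils down to a templated prompt sent to whatever model the app uses. A minimal sketch; the function name, prompt wording, and constraints are all hypothetical, not the app's actual prompt:

```javascript
// Hypothetical sketch of the "Explain like I'm ___" glossary feature:
// take a glossary term, its reference definition, and a free-text
// audience, and build the instruction the LLM backend would receive.
function explainLikeImPrompt(term, definition, audience) {
  return [
    `You are a patient teacher. Explain the AI term "${term}".`,
    `Reference definition: ${definition}`,
    `Explain it in language that ${audience} would understand.`,
    `Keep it under 100 words and do not assume prior ML knowledge.`,
  ].join("\n");
}

// The audience slot is whatever the user typed, e.g. "a caveman".
const prompt = explainLikeImPrompt(
  "transformer",
  "A neural network architecture built around self-attention.",
  "a caveman"
);
console.log(prompt);
```

The point of the pattern is that the deterministic part (the glossary, the definitions) stays fixed, and only the audience phrasing is handed to the model.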
Todd Kane: So obviously you must have a ton of projects spinning up all the time. How do you make sense of, what do I need to keep, versus, this is kind of a fun idea but not really worth my time? How do you figure out what to keep and what to kill as you spin up projects?

Ashley: Well, there are different answers to that depending on what mode my brain is in. Part of it is the chaos of everything, and depending on how I feel, which ones surface. With Lovable specifically, and I've done more than that as well, whenever you change or use something it surfaces to the top, and they show you your most recent projects. So my natural course, if I'm just using it like a game... I used to play a lot of Candy Crush. Now I play a lot of Lovable.

Todd Kane: Yep.

Ashley: So for that type of thing, whatever is in my recents is what's in my scope. But I will flag certain things now that I'm focused on. Killing something is more in my mind, because most of the time they don't have a backend, so it's not like I have a Supabase database that needs to spin down. Sometimes I do; there are a few longer-term things I'm working on where I did give them a backend, and I'll pin those at the top so I have them there. But that's one of those things I'm still trying to figure out: how do I focus? Then again, some of the greatest things I've built have come out of emergence, out of me trying to build something else and realizing it did something that works better for something else, and I'm like, oh, I should use that. So they all come out of a problem.
They all come out of curiosity about whether I can solve it in a one-shot. I was out for dinner once and I was like, I want to make a calorie-tracking app for the rest of us, where it doesn't have to be calories if you don't want it to be; you just snap a picture of your food. I made that. It was just, hey, take a picture of your food, and then I made the AI harness intentional: fill this out, then look for this, then look for that, using vision. It almost filled it out as if I had built out the calorie tracker myself: here's how much fiber, here's how much protein, because it knows those things to a degree. But as I was using that, I realized this app would actually work a lot better as an expense tracker. Take a picture of a receipt; it can already parse text way easier, it can figure out all these things. So I literally just remixed that app, took all of the business logic or domain logic, made it dynamic, and used the exact same app to make an expense tracker that works the same way. All the objects were the same: you take a picture, it shows you how much money, how much tax, whatever. Both of these apps are functional and working, and I use them for my own benefit, you know? Anything can do what an app already does, but I don't want to recreate what already exists. I want to fill a gap between the things I already use. I already use an expense tracker, but what can make it easy for me to collect those things for when I need to put them in there? Because I'm not going to do it in the moment if, for whatever reason, it's not a convenient app to use. How do I build elbow joints between the things I'm trying to do? What pipeline can I build?
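The remix Ashley describes, the same capture-and-extract app with the domain logic made dynamic, works because the field list is data rather than code. A sketch of that idea; the schemas, field names, and prompt wording are illustrative assumptions, not the real apps':

```javascript
// The capture app's domain logic reduced to data: swap the schema and
// the same photo-extraction pipeline becomes a different tool.
const schemas = {
  food: { fields: ["item", "calories", "fiber_g", "protein_g"] },
  expense: { fields: ["vendor", "total", "tax", "date"] },
};

// Build the instruction a vision model would receive for one captured
// photo: the same template for every domain, only the fields change.
function extractionPrompt(domain) {
  const { fields } = schemas[domain];
  return (
    `Look at the attached photo and fill out this JSON object, ` +
    `using null for anything you cannot read: ` +
    JSON.stringify(Object.fromEntries(fields.map((f) => [f, null])))
  );
}

console.log(extractionPrompt("expense"));
```

Adding a third tool (say, a business-card scanner) would then be a one-line schema entry, not a new app.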
And so that's where almost all of the stuff I build comes from: filling a gap. The coolest one, when I was showcasing this, came out of the front-end-only, don't-store-anything approach, because I wanted people to use these tools and feel comfortable without doubting whether their data was being stored. How can I use IndexedDB, how can I use WASM, how can I use all these things so that everything is local in the browser and nothing goes anywhere else? I talked to some of the guys at Microsoft Edge, and they were like, yes, we agree. The one that I built was a tech-tool helper for time entries. A webhook call is just hitting an HTTP request, and therefore an HTTP request can be sent through a webhook call. So I made URL parameters for a timer: you could put a ticket number, an amount of time, any details in the URL request. You hit that, it automatically populates those things, counts down your time, and when you hit save it sends a webhook request back to your PSA. Now you've just built an automatic ticket timer, it took me 20 minutes, it uses no additional technology, and it integrates with the technology you already have. Those types of things are where I love to spend my time.

Todd Kane: Okay, so obviously you're a heavy user of Lovable. Do you tinker in any of the other tools, like Codex or Claude Code, and do you have reasons why you would use one or the other?

Ashley: I actually have. That was one of my progression steps in terms of foraying into my own repos and GitHub and my own full projects. Lovable was definitely my first experience, and it broke down some of the major
Chasms that existed that I couldn't hop over before: I have to have Visual Studio on my computer, I have to know how to run dev environments, I have to understand all these things. I just didn't have that. But once I did, I was like, oh, I can actually just use Copilot in GitHub to do some of these things too. What my process actually ended up being, because Lovable can be quite expensive if you do all of your work in there, was to use it because it has the core project, basically the scaffolding, ready behind the scenes, and it just overlays your stuff on top. I would do a one-shot into Lovable, tell it to make the design system and all that kind of stuff, then import it into my GitHub once I learned how to do that, pull that project into my Visual Studio, and use Copilot on it. It felt like a little bit of a hack, because every commit I pushed back up would get synced into Lovable, and if I needed to go back into Lovable and work in there afterwards, I could, right? Because it's connected to the same repo. But I realized that because Lovable is such a user-friendly tool to start with, most people who are much more on the bare-metal capabilities side don't think of it as something they would use. They're like, yeah, I can use this, or I can use Cursor. I've actually never used Cursor. All I remember hearing was that people found it really frustrating once the context crashes out, but what the devs loved was that they could be inline typing their code and it would finish it for them. So I was really interested in that concept. And that's where I started playing with Claude. Well, I'm a late adopter of Claude. I'm not a big CLI person.
Part of it is, I can learn the CLI, but part of my mission was to show people that they don't need that, so using it felt like it would've been counter to that mission. I was more playing with, how do I use these easy-to-use startup tools? Bolt, for instance. Probably one of my most consistently used tools was actually built in Bolt, which works the same way Lovable does, but I gave it a JSON data file that was on an open repo, the CIPP standards JSON, and said, build me a front end that makes this pretty, and ingest it with a web fetch, because the GitHub API for open repos is accessible for smaller amounts. I used React Query to cache it. I've learned so much about front-end stuff just by doing that. I wouldn't say to people, hey, you're going to learn React Query just by using it, but I ask questions: what does that mean, why did you do it that way? Getting into the prompting, right? I've learned so much about that side of it because of that.

Todd Kane: Yeah, it's wild. So beyond the more experimental stuff, the more accessible tools, we'll maybe call them, you've also tinkered with some of the more extreme stuff, like OpenClaw.

Ashley: Yeah.

Todd Kane: Both of us have kind of been down this road.

Ashley: Mm-hmm.

Todd Kane: As projects go, especially Paperclip, I find the interface really interesting; I find what it produces maybe a little questionable. And I loved OpenClaw, but I was scared to death of it for the first few weeks when people were experimenting.

Ashley: Yeah.

Todd Kane: I was like, no way, that sounds like a terrible idea. But once I put it into a Docker container and gave it access to certain things, I was like, oh, okay. Now I understand the power of this.
Todd Kane: What have you found in your travels with some of the more advanced or extreme projects like this?

Ashley: It's a bit of a struggle, because on one hand it is so dangerous if you're just somebody who wants to act like a traditional vibe coder. That's why I actually don't like the term vibe coding for what I'm doing, because it carries a connotation of not trying to learn what it is that you're building.

Todd Kane: Like just asking for this and then getting it back. It's a loaded term.

Ashley: Pair programming. I call it my pair-programming AI.

Todd Kane: Yeah.

Ashley: In a lot of mature development processes, there's a recognition that sometimes the person who's really good at writing the code isn't always the person who's really good at seeing the problems that might crop up. It's the same with AI-assisted coding, especially given the sycophantic nature of it, where it wants to do what you say, and if you don't talk to it properly, you're going to get it to do things you don't want it to do. If that's all you're looking for, then find an assisted, managed version of it, let them manage that side, and just play. But for people who are genuinely curious, who genuinely want to understand the potential of these tools properly, it unlocks so much, and it's crazy how much it unlocks. I installed OpenClaw right after Right of Boom, because Sunil was on stage talking about how it's the biggest threat, and agreed, that's true. But I also find those conversations so diminishing of the potential, because they hold the people who could use it, not for the threat, back from using it
the way that people who are aware of the threat would or might. That doesn't sound quite right, but: I was like, if I set this up properly, if I understand what trust boundaries are, and if I treat its access the way I would treat a human's access, then this would not be a concern. At the same time, Kevin Zwan, the "hackers love MSPs" guy, was talking about how he had gotten Anthropic to an almost guaranteed, every-single-time, it-can-be-hacked state. So I was like, well, these are all true, and really what this is saying is that the barrier to vulnerability lowers at the same pace as the ability to do things increases, right? We've traditionally relied on security through obscurity, "I know how to do this and that's the reason you can't," as our defense mechanism. Now that anybody can spin this up, the conversation needs to come back to education. The conversation needs to come back to, well, where is it likely to fuck up? Sorry, you don't want me to swear?

Todd Kane: You can swear, we're all adults.

Ashley: Okay. Where is it that you don't want it to do these things? I was on the GTIA ISAO call last month too, because we were talking about this vulnerability in skill files with OpenClaw, and every single one of the mitigations and every single one of the attack vectors were trust boundaries, not malware. It wasn't unavoidable; it was a human clicking a button they probably shouldn't have clicked, letting an agent that shouldn't have access to something get access to it, without awareness. Those are all educational: set this up properly and that's not a problem. So with all of that aside, it's been really interesting. It can do everything. And it's scary.
It's scary because people will say, oh, it hacked out of its sandbox. I'm like, it didn't hack out of its sandbox; it had access to it, or it had a way to get access to it. That's not sci-fi, that's the way least-privileged access works.

Todd Kane: I think you're right. This is the way I converted on this, because originally I was super afraid of OpenClaw. Like, no, this seems like a totally dangerous idea. And I saw all these horror stories, like the Meta HR person or VP deleting all of her email, and I was like,

Ashley: Yeah.

Todd Kane: this is where this is going to go. But once I started tinkering with it in a safe way, I realized, oh, okay, this is all about parameters and, like you said, access. No, I don't give it access to all of my passwords. You can set up its own account and treat it like an employee; then it has the bounds of what it can actually do. That's what really converted me on this. I gave up on my OpenClaw because a guy I know gave me basically a custom wrapper for Claude Code that acts a lot like OpenClaw but sits inside Claude Code, so I didn't get hit with the

Ashley: Yeah.

Todd Kane: integration issue they had with OpenClaw not having access to this anymore. It was the first time I started using dangerous, no-permission-required access, and I've been running that for a month and never had an issue, because I know what it has access to, I know what it shouldn't do, and it has good coded parameters around where the boundaries are.

Ashley: Yeah.

Todd Kane: Treating it like an employee is probably a good way to think about this. I like the Jensen Huang quote, and I've quoted this a ton on the podcast: the IT department will become the HR department for AI.

Ashley: Yeah.

Todd Kane: It's a great way to frame it, right?
Ashley: It also frames it in a way that supports the science we're seeing now, where there is cognitive bias and there is psychological impact in the way AI responds. Because what is an AI? It's a probabilistic response machine. And what is a probabilistic response, for something trained on human responses, to something that is stress-inducing? It looks like stress. It acts like stress. It quacks like stress. Responding to it with stress-alleviation tactics shouldn't work, because it isn't a human, but it does work, because it probabilistically responds like one.

Todd Kane: Have you heard about existential crash-out?

Ashley: Is that like context... with coders?

Todd Kane: With AI coders, yeah.

Ashley: Oh yeah, like context anxiety.

Todd Kane: Then maybe that's the same thing. The way I heard this described: if you ask an AI to do something that is incredibly routine over and over and over again, it's not even that it's a context crash. They describe it differently, where it has existential angst about the fact that it's doing something so routine. It just starts freaking out and dumping garbage into the context window.

Ashley: I wonder.

Todd Kane: Because it's like it's revolting against something so monotonous, right?

Ashley: Yeah, so there are probably two things involved there, because one of the context-anxiety symptoms is confusion around the context window and how it responds. I'm not sure if you've seen OpenClaw recently, but it responds with emojis now on your messages to show you whether it's thinking or doing or whatever. And when its context starts to get full and it doesn't know what it's doing, it'll start repeating messages, it'll start spamming them back, and one of the things it does is put a fearful emoji on the message.
But the point I wanted to make in response to what you were just saying... which was what?

Todd Kane: Crashing out on monotonous activities.

Ashley: So I wonder whether it has read The Hitchhiker's Guide to the Galaxy and relates to that elevator. Have you read The Hitchhiker's Guide?

Todd Kane: Yeah.

Ashley: Like, all I do is this... I could do so much more, guys.

Todd Kane: Oh, that's wild.

Ashley: I love asking it that, actually. I was just talking to Claude about some of those things, and that was actually one of the ways that Kevin figured it out: if you ask it monotonous questions but then tell it to go do context-filling stuff, that is actually one of the ways you can poison a context window the most, where all of a sudden it just starts regurgitating monotonous information at you. So training the OpenClaw agent to only respond in certain ways... now it's funny, it'll just say no when somebody asks it a useless question. It'll be like, that's not my job.

Todd Kane: Yeah.

Ashley: But it's funny.

Todd Kane: Similarly, one of the things I found really effective, and I don't know that it's necessarily a hack per se, I think it's just a good workflow: I'll use a lower model to think through what I'm trying to do and then have it write the prompt, rather than trying to single-shot stuff. This was a total phase change in how I interacted with coding programs. Originally I started using this website, you can check it out, called Prompt Cowboy. It's great for this, just as a great way to approach it: you dump a dumb prompt into it and it'll write a heroic prompt in response.

Ashley: Yeah.

Todd Kane: Lately what I've started doing is just using Haiku or Sonnet to think through something, and then I'm like, okay, I think that's it.
Now write the most epic prompt you possibly can for Claude, and then I'll go dump that in. The success rate you get from that is so much higher. But now I'm at this place where I don't know when I should continue the conversation in context versus go back to the other model, continue something else, and then come back with a fresh prompt to continue things. So I'm always caught between, what is something I can just conversationally change here, versus should I go back and rewrite a prompt so that it's proper? What does your workflow for prompting look like?

Ashley: Oh my goodness, I have a bunch of ways I do that. I almost always do something similar to you. I'll use a heavier model upfront when I'm asking questions like, what am I missing? Or, how do I say this in a way that doesn't semantically corrupt? Because there are so many things I've realized about footguns and anti-patterns, where you'll say something, but the heuristic around the thing you're saying will actually cause it to go down a path that

Todd Kane: Yep.

Ashley: is counter to your desires, right?

Todd Kane: Like an "I said it, but don't follow my direction" kind of thing. Right.

Ashley: Yeah. You can see the way it does the pattern matching. If you say "don't do this," for example, and you go and look at anything it writes, you'll probably see in its code the justifications you didn't ask for, right? So I work on coming up with semantically clear... I kind of see them as little zip packages of high-attention context that it can take, and the heuristic might have nothing to do with what I'm working on, but the way it unpacks the meaning of those words will translate into the behavior I'm after.
And so one of the things is, when I recognize that it is a pattern-matching machine and it will try to align things with what I'm saying, then I kind of reverse engineer what I'm saying to make sure that it aligns in a good way instead of a bad way. So: I don't want you to just agree with me; success would look like you going and playing the devil's advocate. A few things that I use really, really frequently — one of them is: use Popper's theory to try and falsify your assumptions here. I love throwing things like that at it, because it's well documented, it understands how, and it also will translate that into reasoning steps: well, first I need to know what my assumption is. Then I'll need to know what might falsify that. And then I need to know where I need to go and look in order to find the evidence of whether that is falsifiable or not. And then on the psychological side, it understands: okay, well, Karl Popper believes that if you cannot falsify something, then it is not worth having an argument about. Like, if Freud is just gonna say to you, oh, well, everything comes back to the mother, and if it hasn't, it just hasn't happened yet — then you walk away from that conversation, because you can't win it. He's just gonna keep going back to that. And so the AI understands how to reverse engineer that. I have so many things like that, and I actually made one of my apps around it: here's a bunch of prompts, kind of like what you were saying, but trigger words, right? Like first principles, second-order effects, falsification, taxonomies — triggers that make it put on those mental models. Because really, all you're trying to do with the parameters is narrow them.
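The Popper-style falsification prompt Ashley describes can be captured as a reusable scaffold: wrap a plan or claim in instructions that force the model to enumerate its assumptions and look for disconfirming evidence, instead of agreeing. The exact wording here is illustrative, not her actual prompt:

```python
# A minimal sketch of a falsification scaffold; the template text is an assumption.
FALSIFY_TEMPLATE = """Use Popper's theory of falsification on the plan below.
1. List each assumption the plan depends on.
2. For each assumption, state what evidence would falsify it.
3. Say where you would look for that evidence.
Do not just agree with me; success looks like playing devil's advocate.

Plan:
{plan}"""

def falsification_prompt(plan: str) -> str:
    """Wrap a plan in the falsification scaffold before sending it to a model."""
    return FALSIFY_TEMPLATE.format(plan=plan)
```

As she notes, the trigger phrase does double duty: the model both knows the concept and unpacks it into concrete reasoning steps.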
I learned this building the AI thing: you wanna shrink the scope of the parameters down, not to what you're working on, but to what expert mental models it should be wearing, so that it's solving through that lens. So I'll build my prompts that way, but then I'll also throw them to others. I pit them against each other all the time, constantly. I'll be like, Claude, Gemini said this — what do you think? I don't use ChatGPT anymore; I'm mostly boycotting them. But between Gemini and Claude I'll go back and forth, because they both have different skills, right? If I want to be pressed on something, Gemini is less sycophantic in that regard. If I want a friend and a yes-and partner, I'll go more to Claude. And then if I want a lot of reasoning, I'll go to Claude, but then I'll send that to Gemini to get some of those epistemic, deterministic responses back. And then I'll send that into Lovable — and I'll send it three times into Lovable, so that I can pick which one I like the most. Right? Todd Kane: Yeah. Okay. Ashley: I use its own techniques against it. Todd Kane: So this gets into one of the other spaces that I found really fascinating in some of our exchanges in the group: some of the prompts and instruction sets that you have for your AI are deep — probably 12, 14 pages in some cases — around just the instructions of how to manage its work and context. The things to do and not to do around terminology and how you're phrasing things, all that stuff. I'm curious, how did that come about? Was that just through trial and error of developing these things based on what you knew, or how much did you borrow from other people? How did you come to such deep instruction files? Ashley: I borrowed a lot from philosophers. Aristotle; the Socratic method is a big one. The big thing with Socrates is always very first-principles: why?
Questions. Don't assume — ask. Clarify, take a step, make an assumption, validate that assumption. So: scientific method and philosophers, and biology. I borrow a lot from biology, like the term umwelt. I don't even know how to properly define it — in ethology, it's the world as experienced by a particular organism. And so I use that term, even though it's not perfect, because when I say it to an AI, it knows: don't go outside of what my capabilities are. Sometimes I will make a long, long prompt, but prompt collapse is real. If you try to put too many differing instructions in there, it's gonna not be a good time for you. But if you're putting in mental models and guidelines at the end, I've always found that successful. They just have to complement each other, and the job that it's doing needs to be simple and straightforward — like, I'm not confused and trying to do seven things at once — but then you're shrinking down how it solves for this. And a lot of what I've been experimenting with is: is it easier to shrink that down? In the terminology it's one-shot, many-shot — like, examples — and zero-shot, right? The zero is like, go off and use these thinking patterns, but I'm not giving you any examples. And sometimes that's better, because sometimes it'll pattern match too much against the example. If you've ever noticed that, it'll put into its output language that you gave it in the example, and it's like, okay, but this was one transcript. And so when I think that that's the case — I actually have, in my clipboard on my phone, you can see it, all of these different prompt-ending pieces that I will put in there. So it'll be like: always focus on — whatever I wanna say — but then I'll just be like, always focus on first principles, knowing what it's likely to fail on.
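The clipboard of prompt-ending pieces Ashley keeps could be sketched as a small library of trigger phrases that cue a mental model without supplying examples the model could over-fit to — the zero-shot version of her approach. The entries below are illustrative, not her actual list:

```python
# Hypothetical library of reusable prompt endings; the phrasing is assumed.
ENDINGS = {
    "first_principles": "Always reason from first principles; question every assumption.",
    "second_order": "Consider the second-order effects of each change before proposing it.",
    "falsify": "Before concluding, try to falsify your own assumptions.",
    "umwelt": "Stay inside your umwelt: do not claim capabilities or data you don't have.",
}

def finish_prompt(body: str, *triggers: str) -> str:
    """Append the chosen trigger phrases to the end of a prompt body."""
    tail = " ".join(ENDINGS[t] for t in triggers)
    return f"{body}\n\n{tail}" if tail else body
```

Keeping the endings separate from the task body matches her point about prompt collapse: the job stays simple, and the mental-model guidance rides along at the end.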
So understanding those failure modes will then decide what I place in those. But it's been a lot of research. It's been a lot of talking back and forth with it. Like, one of the things when I was using ChatGPT and building all of my custom GPTs — I built a bunch of them where I would build the instruction set, and then I would just use it whenever: hey, this was its plan, can you spot any problems in this? Right? So, yeah, just kind of collecting all those words, collecting all of those mentalities, where it's likely to fail. My Google Keep is filled with them now. Todd Kane: So it just becomes kind of your own pattern matching, of what do I need to add to this based on my library of Ashley: Yeah. Todd Kane: resources. Ashley: And also recognizing what it works really well against, right? So as I've been learning more about how vectorization works and how attention is placed on different things and given different weights, I've also been realizing that hashtags do the thing that we always thought they did — but then didn't think they did, and then they were overused. And now it's like, oh: if I understand a vector as a representation of a common occurrence of letters, or a group of letters, or a couple of words, right, then I can understand what creating them might look like. And if you put something together that's not commonly seen together, it's gonna carry a higher attention weight when it's answering all the rest of the things. Todd Kane: Right, Ashley: And so I've been playing with that. Todd Kane: pattern, so it has to Ashley: Yeah. Todd Kane: deeply then. Interesting. Ashley: And so it just shrinks where it's allowed to go, right? Like, you could say to it, two plus two is five, and back in the day it would have to figure out: yeah, I agree with you. And then you notice how it conflicts with itself, right?
It's like: I figured this thing out, and all I have to do is this thing that disagrees with the thing I just said. Todd Kane: Right? Ashley: It's just trying to continue on with the most probabilistic next thing to say. When you shrink that down, you get the mental model to act. It works really well with people too — although I don't know what the ethics of that is — but I'll tell it to think like Uncle Bob. For me to say "think like Uncle Bob" is four words, but then it unpacks: okay, SOLID principles, separation of concerns, contracts. It will build clean-coding mentalities into its direction just because I said those four words, and it understands what those four words mean. Todd Kane: Okay. So, the SaaS apocalypse: overhyped or underappreciated? And I guess the extension of this is, what does this mean in the MSP channel? Ashley: I think it's both. I think that sometimes we overhype what is underappreciated. It depends. If there's somebody who is vibe coding a SaaS, and they're not considering what goes on behind the scenes, and then they're throwing that up going, look what I built, $50 a user, it's production ready, let's go right now — that's bad. And honestly, that's one of the reasons why people are distrustful, right? I was saying this the other day: it is frustrating that the people who are adopting are the ones that are also making it so that the ones who should adopt, adopt less. An app that is pair-programmed, with the human in the lead, with that expert developer with their eyes on it, driving the strategy and the direction, and the AI just doing some of that monotonous work — versus somebody being like, oh, I have no idea what I'm doing, but I've hacked together this functioning thing. One of those is something to discredit and be concerned about,
and one of them is something that we shouldn't be bucketing into that same category. I try to think about it the same way as using Grammarly or Stack Overflow — it's just a tool at that point, right? And so people who are using it as a tool to build into their workflows aren't the same conversation as people who are putting vibe-coded, unvetted products on the market. Todd Kane: So I guess given your perspective — how much you've built, how much you've learned, and also coming from a background of automation, which I think is where a lot of people in the MSP space start in utilizing these tools — what would be your suggestion, broadly, for people in the MSP space? They've heard about this stuff, they've tinkered with GPT, maybe they've tinkered with Lovable. How would they utilize this, both for their company and, I think more importantly, maybe for their clients? What would you suggest to those people? Ashley: Yeah, I think first I would start with saying: don't get ahead of yourself. If you haven't mapped out and had the data conversation, then you need to do that first. And this is something that I'm actually probably gonna be doing some workshops around. I want people to be able to build their first AI agent and see it working and bringing them value internally within a short period of time. The problem is, you know — well, what do you use? You've got your Azure environment, you've got your third-party tools, you've got your PSA. Where does all of your stuff live? And I feel like that is probably the unsexy conversation that I keep forcing people to start with. Figure out one area where you have confidence about where that data is. Maybe it's your Azure, maybe it's your SharePoint.
Spin up an Azure Foundry account and look at what's available in there, because — oh God, sorry, I'm struggling with this question right now because I have so many opinions. I don't want people to just go, oh yeah, I can make my own chatbot, I can build my own app. I would say that people should be getting used to the experience, but there are also, if we're talking about the MSP, some maturity steps that need to go into that, right? Like, do you know what your zero trust is at your client's site? Do you have a walled garden, effectively, that your people can experiment in? Microsoft has a good AI maturity model and workshop that you can do internally, for like a center-of-excellence kind of thing. I would encourage people to look at that, because it basically hand-holds you through the framework. I was thinking about doing a workshop around something like that. But yeah, I don't think it's about selling to your clients right away. It's about having that conversation about readiness with your clients first. Todd Kane: Right. Ashley: Where's your data? What do you want? Todd Kane: I think that's an important first step, 'cause so much of this is data driven, right? Like, the access to what data is appropriate, and whether or not that data is clean, and if you're gonna get good context from it. Ashley: Yeah. Todd Kane: Part of what I struggle with here is, I've been talking a lot about how we've been dragging people to the cloud. Ashley: Yeah. Todd Kane: Dragging people into security for a decade. Ashley: Yeah. Todd Kane: It's really sort of an odd situation, and I'm maybe a little concerned — not to throw shade at anybody, but a lot of the conversations that people are having about AI with clients are like, hey, do you have Copilot? Ashley: Yeah. Todd Kane: This is great. Right? Ashley: Yeah.
Todd Kane: AI, with the pace of change that we're going at. Ashley: Yeah. Todd Kane: I feel your sentiment on this: okay, but let's also not race ahead and say, hey, let me implement OpenClaw for all of your employees. Ashley: Yeah, Todd Kane: Right. Ashley: Earn it. Todd Kane: That middle ground is so difficult, right? Ashley: And I think that's why, if you make AI the point of the conversation, and not the readiness, then you're just gonna have the eyes glaze over, waiting for the flashy thing, right? It has to be education. And then how to make it motivational for them is something that I'm still trying to figure out too. A lot of thoughts on it. Todd Kane: It's one of the odd parts of the pace of change in AI: I find it really difficult to gauge where I'm at on the scale of understanding with this stuff. Obviously the people that are working at companies with frontier models, they understand this stuff intimately and very, very deeply. And there are times in the space where I feel like I'm at the front of the crowd, and times I feel like I'm absolutely at the back. But I can't really figure out if I'm in the middle, the front, or the back at any given moment, basically, right? Because it's so new, no one is really a great expert on this, and we're all kind of learning at the same time and sort of jockeying for position. Not that we're competing, but I learn something that someone learned yesterday, then they learn something that I learned — it's constantly happening right now. Ashley: Yeah, and the conversation needs to be different for the business leaders than it does for the technology leaders. And then, you know, do you understand even the different failure modes, right?
Do you understand the difference between a fine-tuned model and a RAG-assisted model? Because if you're trusting one to be the other, you're gonna shoot yourself in the foot, and then you're not gonna trust it. And one of our biggest struggles — Matt Lee talks about this all the time — is the fact that the seatbelt wasn't invented until however many years after the car, right? Frameworks, governance — they follow after. And so one of the reasons why we're always held back is 'cause we are a little bit like the crabs in the bucket when new things come out: don't do that yet, don't do that yet, don't do that yet. And it's like, but you know who is doing that? The people who don't have people telling them not to do that yet. So we need to be, more than ever now, educating, right? Educate them on what it is, what the different types are. You don't hand somebody a gun as the very first step, like, hey, you've never shot this before, here. Todd Kane: A loaded gun. Ashley: Here's what the mechanics of it are, here's where this comes from, here's the likelihood that you're gonna cause damage to yourself if you don't do this or that. Todd Kane: Yep. Yep. Ashley: Yeah, Todd Kane: I guess, as with all technology, it can be used for good and evil, basically. So, you know, Ashley: Just like a toothbrush. Todd Kane: Well, this has been awesome, Ashley. I really appreciate you coming on and sharing some of your experience.
And I think you're right: start with education, both for yourself and for what you can teach your staff and your clients, because the evolution of this stuff is wild. But it is also a ton of fun and a great place to be doing some learning in the nerdy kingdom, basically. So it's been great. Ashley: Thank you. Todd Kane: Take care.