John, welcome to the Evolved Radio Podcast. Thank you so much for having
me. This is really exciting. So a good place to start, I think, is, is
your origin story. I think you have a really fascinating journey and I
think maybe relatable to a lot of people, but somewhat unconventional.
You want to kind of give us your background and how you got to be
where you are? Sure. So I found myself at one point early in
my life working in an aluminum factory, Alcan, the Aluminum Company of Canada, for 12-hour shifts. Very difficult, physically heavy work, nothing to do with computers: forklifts and
machines and very dangerous heavy operations.
After a few years of that, I decided to make arrangements with the union to
go back to St. Lawrence College in Kingston, Ontario for
the 3-year computer programmer analyst program. So
I managed to give up my day shifts for night shifts so that I could
learn programming. But what was interesting is my placement out
of that college was more in IT, more of a
networks and desktops and Windows as opposed to
programming. So I pursued that aspect of
technology with certifications and eventually I did get on
as a senior network engineer at an insurance company. Ultimately, I
ended up as a senior network architect for the Parliament of Canada in
Ottawa. 50 buildings, a greenfield network, 3
data centers, 500 remote sites, national
importance, the network, right? So that is,
that was really where I learned a lot and
I started to embrace network automation pretty early
because building that network was one thing, but
operating it day to day was an entirely different problem. And the
scale and complexity really required us to embrace
automation and doing things kind of programmatically on the network.
So I self-published a book called Automate Your Network, and that's sort of when I started to make a name for myself on the social networks, LinkedIn and Twitter, and it all sort of revolved
around the first book I self-published about network
automation. And then a few years later, I co-authored a second book through Cisco Press about Cisco pyATS, another network automation framework.
In terms of when AI enters the picture for me, I had access
to ChatGPT 3.5 in November of 2022,
but what it really changed for me was when I applied for an API key
and started connecting the artificial intelligence to my network
automation work. And I've been obsessed with
AI ever since. You know, the introduction of RAG, the introduction of plugins for ChatGPT, CLIs, extensions, agents, LangChain, LangGraph, all of it.
Every single aspect of artificial intelligence I've been trying
to consume and share publicly my journey
as I'm learning these things and as I'm trying to apply it to my trade, which is network automation, right? And how it fits there.
And it's really taken me places I never thought I would go. It's been an
exciting journey. You have this term that I really like, VibeOps, which I assume is riffing off of the DevOps scene. Do you want to give us your breakdown on this? Is this something that you coined, or is this just something you're championing? Well, I don't know if I coined it.
Maybe I did. I may have, but I sort of see that
we went through traditional ops into DevOps, into NetDevOps,
into AIOps, and I would say there's a bit of a difference. AIOps is really applying machine learning, supervised, unsupervised, and semi-supervised learning, over big datasets. But with generative AI came this idea of vibe coding, where anyone can do it: you apply your own personal experience, knowledge, and level of expertise to use natural language to drive code. Why does the network have to be behind? Why does
infrastructure have to be behind? Unlike network automation, which took
10 or 15 years to catch up with what developers were doing,
can't we build our own agents? Can't I just use
natural language to say, how's the health of the Wi-Fi in
Vancouver? How is the traffic flowing today between
New York and Singapore? Right? Using natural
language to interface with this very sophisticated
technology known as the network, right? So I like to think that, you
know, it is sort of a mix of vibe and operations,
and it's using things like model context protocol, plugging
in the right MCPs to the right tools, and now
agentic skills. And developing the skills that your
agent has and using natural language to achieve
these goals. And it's wonderful. I think it's more accessible
than network automation and having to learn Python or
Ansible or Terraform to just be able to describe it in
your own natural language. So we can all become vibe operators.
Excellent. I mean, it is kind of incredible to
me that like all the dashboarding and capabilities that we
have, the network is still largely relegated to the command line interface, which makes it obscure for a lot of people. And I think it just
doesn't get the same level of attention and care that
it otherwise could, because, I mean, quite frankly, the network layer is the most critical part of the whole infrastructure, because if you can't
carry the data, it doesn't matter sort of what's happening on top of it. Right.
I think it's sort of a victim of its own success, in a way, in that networks typically prioritize stability over innovation, let's say, right? So even if it could
save you 3 days on a weekend, some people are still
more comfortable just doing it the long way, doing it at the CLI, doing
it the way it's always been done. I think that AI is a real
opportunity to reevaluate what is
important in network engineering, the design, the security,
the guardrails, the uptime. And I think
that AI and these agents that we can build, if we blend it with our
own 20, 25 years of experience, can
really help, particularly in infrastructure,
because it is so obscure, because it is such a small
talent pool of CCIEs, of CCNPs, of CCNAs, you know, and
the network hasn't been the most attractive field
for the next generation. They wanted to get into security, they
wanted to get into cloud, they wanted to do something sexy and
fun. So the network is still around, but
there's a lot of atrophy, there's a lot of huge gaps in skills.
And I think it really is a ripe field where
artificial intelligence can play a meaningful role, let's say.
Yeah, I think it is sort of a wholesale reboot in a lot of ways
because you know, in the companies that I ran, I was
somewhat frustrated by the fact that a lot of graduates from technical
colleges were, I felt, a little too specifically
focused on network skills when a lot of the things that we did in a
lot of those environments weren't necessarily network focused.
But the more I think about it, it's like, to your point, there was a
lot of opportunity in better managing the infrastructure. It was
just something we tended to not focus on. And we utilized maybe some tools or certain technologies that made those skill sets less of a requirement. But
to the earlier point, like, I think there's a lot of
capability that gets missed: you don't necessarily need a CCIE to diagnose, you know, an
SMB network environment with a single
layer, right? Like, it's just, it's not that complicated and you're kind of throwing
like the surgeon at, you know, a plumbing problem. I
recognize now that there was so much that happens in the
infrastructure, even if it's an uncomplicated environment that
could have benefited from someone generally understanding the network layer
better. You know, we have all this data and if the
AI is interpreting what's happening at, at that lower
layer, then you can get a lot more granular, a lot more
insightful information on how to manage a problem that's
happening. Because in IT, we see this problem all the time:
something's wrong. We're not exactly sure what it is. And it's probably
something at the network layer, but none of us really understand the intricacies
of how the packet layer works. And, you know, you made this joke in
one of your presentations of, like, well, okay, I guess this is the one time a year I'm going to have to break out Wireshark and figure out all over again how this thing works. Right. So
I, it's funny about the packet analysis, because, I know it sounds funny, I was literally lying in bed and I sort
of sat up and went, hang on. I can put JSON into
a vector store and do retrieval augmented generation, something I had
already done. And then the other side of my brain was like, you
can use Tshark at the command line to turn a packet capture into
JSON. And these two ideas sort of married and mixed like
paint in my brain. And so that was one of the earlier things
that I tried was to see if I could upload a packet capture and just talk to it in natural language using artificial intelligence.
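As a rough sketch of that Tshark-to-JSON idea: tshark's `-T json` export nests fields under `_source.layers`, and each packet can be flattened into a text chunk ready to embed into a vector store. The chunking scheme below is purely illustrative, not the project's actual implementation.

```python
import json

def pcap_json_to_chunks(tshark_json: str) -> list[str]:
    """Flatten tshark's JSON export into one text chunk per packet,
    suitable for embedding into a vector store for retrieval."""
    packets = json.loads(tshark_json)
    chunks = []
    for i, pkt in enumerate(packets):
        layers = pkt.get("_source", {}).get("layers", {})
        parts = [f"packet {i}"]
        for layer_name, fields in layers.items():
            if isinstance(fields, dict):
                # Keep only simple string fields for a readable summary
                kv = ", ".join(f"{k}={v}" for k, v in fields.items()
                               if isinstance(v, str))
                parts.append(f"{layer_name}: {kv}")
        chunks.append("; ".join(parts))
    return chunks

# Sample shaped like `tshark -r capture.pcap -T json` output.
sample = json.dumps([{
    "_source": {"layers": {
        "ip": {"ip.src": "10.0.0.1", "ip.dst": "10.0.0.2"},
        "tcp": {"tcp.srcport": "443", "tcp.dstport": "51544"},
    }}
}])

for chunk in pcap_json_to_chunks(sample):
    print(chunk)
```

Each chunk can then be embedded and retrieved to answer natural-language questions about the capture.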
And it turned out to be wildly successful. There are online packet capture exams where they give you the pcap, the 10 questions, and then the answers as well. And the AI was able to get
like 20 out of 20 on these packet capture
exams, which I thought was pretty neat. And it's, it's progressed to a
point where now, with major hyperscaler models, you can just upload your PCAP to ChatGPT and start chatting with it.
That's a capability it has now. It didn't have that 3 years
ago. So things move very, very rapidly. And I think
we should take advantage of these tools. I love Wireshark.
I've been a speaker at Sharkfest
in Europe and in North America. But like you said,
right, the 3 or 4 times a year you actually have to break out that tool, it's to prove it's not the network more than anything.
I've rarely used it to actually prove there was a problem with the network. I
use it to disprove the network as the source of a problem a
lot. So wouldn't it be neat to just upload the capture you get and say,
am I the root cause of this problem? Just in natural language.
Yeah, that one feels familiar to me. I was an enterprise Citrix admin
and, you know, everyone loves to blame the presentation layer, is what I used to say. It's like, "The Citrix server is busted." And it's like, no, no, it works fine. It's usually a network or,
you know, sometimes an infrastructure issue. Quite often it's the
application. There's nothing wrong with the server. Right. So to that point, like you get,
you've got to be able to split these things apart and kind of tease through the OSI layers: what is the actual source of the issue here? Right. The worst one is when you curl the IP and you
get a response, but the URL doesn't respond by name and it's a
DNS problem. It's always DNS, don't we know that already?
That's a long day ahead, which authoritative server is getting this
wrong, right? That's never fun. Yeah. So that, what
you mentioned about doing the packet capture in PCAP and throwing it into JSON,
like you have this project, one of your many, many GitHub
repos that people, I guess like people can just, can they just go
to the primary GitHub and then all of your projects are listed here? Is that?
That's right. So it's github.com/automateyournetwork
and all the projects, all the repositories, are there for you to clone and try yourself. The more popular one this week is one
built on OpenClaw. So I saw the craze around this
OpenClaw and I thought, well, I better try to build one. And I connected
it to network automation tools and different network tools. And
it's pretty neat. I have it in the VibeOps forum with about 600
network engineers poking at it. And the first thing everyone tried to do
was either break it, get it to reveal secrets,
or get it to destroy the network. It was such a weird
social experiment to see, okay, this AI agent, artificial
intelligence agent is here. It's in the room. It's in the Slack channel. Go ahead
and ask it things. And immediately
everyone was just trying to break it, but it held up its guardrails. We asked it for a report at the end of the day, and it held up to over 30 social engineering attacks over the course of the 8 hours that it was alive in the Slack channel.
So, yeah, get involved with the community, join the VibeOps forum,
clone and star and fork the repo. It really is neat to have a personal assistant that's been trained and given skills around infrastructure management. Yeah.
So I guess, and we'll come back to this with the PacketBuddy piece, are you saying some of these are maybe more deprecated now? Because if you just talk to an LLM directly, it already has that skillset; you don't necessarily need those projects anymore. Is that the case? I'm sort of glad I didn't launch a company around using
AI to talk to packet captures because now you can just upload it to any
major model and they seem to know how to do it. I see.
Now, I'm not trying to take credit for this, but some people have asked: while they were training the next iteration of the model, did they pick up your work from GitHub, since it's all open source, right? Did they learn how to do it from
your initial attempts? I'd love to know the answer to that. I
don't know. But yes, everything changes every day.
You know, these hyperscalers put out a tweet and suddenly, you know, your
startup idea is just now an AI can do it, right?
I reused that code inside of that
NetClaw agent. And in Slack, now I can
upload PCAPs to my agent and my agent can talk to
them. So that's still pretty remarkable to think that you just need to
get your PCAP and then right from Slack, talk to the
AI agent and send the PCAP for analysis. So is this kind of how you view VibeOps shaping out in the future? Is it just sort of a chat-based agentic future, maybe in a co-worker model, where you either have specialized agents or even a generalized chat where you're literally just talking to the chat about what's happening in the environment, and it's like, here's all the analysis I've captured in the last 5 minutes of the infrastructure, here are some ideas on potentially what's going wrong and why you're seeing these alerts? Is that sort of what you envision here? I think it's going
to be very much a human resources issue as much as it is a
technology issue. Meaning I wouldn't just turn a bot up on
day one, an agent, and plug it into production. It has to
be trained on my corporate policies. It has to be trained on
the guardrails, the change management requests, certain
network-specific guardrails, all the things that you would train a
junior, right? It's just like hiring a junior human. The first thing you're going to do isn't to throw them into the fire and have them handle some BGP change. You're likely going to start with read-only activities,
testing, documentation, maybe minor tickets that they can
handle. I think it's going to reflect that human experience where you have
this agent start with human in the loop, human on the loop, human in
the lead, fully autonomous. And they're just like
digital, you know, I've heard virtual employees, I've heard digital
coworkers. And they're given a level of autonomy. They can reason and
now they can call tools, particularly in the form of skills
or MCPs. So it's changing very,
very fast here. And I read some Gartner report that
they believe 70% of IT operations
by 2030 will be augmented by AI in
some way, shape, or form. Maybe not totally displaced or
totally autonomous, but AI will be
participating in 70% of the work in my
field before the end of the decade, right? Yeah. I
like the quote, or description, from the NVIDIA CEO, who said that the IT department will become the
HR department for AI. I'm glad that you said that. I was going to
bring that up, but I wasn't sure if people are sick of me saying that
because I really paid attention to that. I know people made a meme out of
it, but through my experience now, having built some
of these agents, it really does feel less like a
technology problem and more like an HR department problem. As
to where I would use these agents in my org, whose work
they can augment, how I expose
them, you know, securely, with AAA, and orchestrating the whole thing together.
Do we need supervisor agents? You know, how high up
this org chart do we build the hierarchy? It's an
architectural HR type issue, right? Yeah, that's really fascinating. I think that's
the only way that this works. I mean, the other sort of HR-ish
type thing that I'm curious about here to get your read on, because I've sort
of debated this in my head and chatted with a few people. I'd love to
get your, your thoughts on this as well is like, what does entry-level
job in IT look like in the future? Right? Because it used to be that,
you know, you would have a junior role, like a level 1 tech
who, you know, does a lot of the triage and some of the basic work
that's routine and has SOPs built around it. And you kind of get some
level of exposure to what's going on here. And it's almost like we're getting to
a space where we're almost eliminating that role and the necessity for
it entirely, because if it's repeatable and it has documentation, then,
you know, just stick an AI on it. But then what does that mean? People
just start at level 2 and they just chat with the,
the AI agents that are monitoring the environment.
And if that's the case, then like, how do you get into that job? And
does it require kind of any technical know-how at
all? I've given this a lot of thought and it does come up and it's
sort of, now we're almost into an ethical, philosophical discussion about, you know, what humans' role should
be and is. And I wish I had a good answer and I'm not trying
to skirt the question, but I think
that in my utopian outcome, in my
mind, those juniors can be accelerated through their career faster.
They don't necessarily have to spend 25 years like I
did to become a so-called expert in some of these things.
I think that their access to AI
and to senior engineers writing these
agents, it's a way to transfer the knowledge and to pass the torch.
I don't think we should be pulling the ladder up on the juniors.
Right. We still are going to need humans that understand
how these agents are written, how they're built, the tools they're built
on, the principles that guide them. There's still very much a place
for human beings here, but I think that it should
democratize things in a way similar to what Lotus 1-2-3 or Excel did for accounting. Now, are there more or fewer accountants after the spreadsheet became widely available? I would suggest more, right? I would suggest that there are more spreadsheets, more data, more accountants.
It's just that their job is a little bit different. The job has
changed and they're working at more elevated levels in the
company. They're working on strategy. They're working on investment. They're working
on taxes, right? Accounting changed as a
result. But it didn't eliminate accounting. I'm trying to build an agent that you can augment your team with to handle a lot of the mundane, repeatable, safe, you know, low-hanging fruit through just natural language, right?
There's always going to be work. There's always going to be work in IT and
in networks in particular, just because we have these agents
doesn't suddenly mean we don't need the humans. And through my own
experience, I went through the same
dismay with network automation. You're going to automate yourself out of a job; automation is going to wipe out network engineers. What I found was when I automated one thing,
they didn't just say, well, thanks, great job. You know, see you later. We don't
need you anymore. You automated that one thing. They asked what we
could automate next. So I became more valuable to
the company after I started introducing automation, not
less valuable. Right? So I think it's similar with
artificial intelligence. If it's John's agent, John's not going anywhere, right? But if
you're just consuming John's agent, if you don't have an agent of your
own, then you might want to, you know, start to contribute in a
different way, right? So this is interesting too, because like
I'm curious to get your thoughts on the
practical implementation of some of these strategies for, you know, some of the
people listening, typically managing, you know, 30
to 130 clients at a time in a
multi-tenant environment. What is your thought on sort of first steps
in exploring the utilization of this technology? Say like they've got an
RMM, like maybe they've got, you know, PRTG or some basic kind
of monitoring tools in place. But the level of visibility on the network in particular, and infrastructure as a whole, is fairly rudimentary. Like, where would be
the first place to explore? I can't imagine it's, you know, OpenClaw is
necessarily the first place to go. Could be a little dangerous, but any, any other
suggestions on the entry point here? I think MSPs are in a supremely unique position to take advantage of
this in that they can start to build digital representatives that
represent the portfolio under that client. Right now, that
agent is going to be specific and bespoke per customer. But
if you can start to connect it with things like retrieval augmented
generation, accessing your knowledge base, your PDFs, your spreadsheets, your Salesforce, your Jira, your Confluence, your Atlassian, right? Start
to list all the tools that you use or that a human would
use to best manage that tenant. And think of it this way: you could have a chat interface into that and ask it
where the pain points are, where the latest tickets are, what
the status of the project is. Think of it as
a practical employee that you're going to build up
and attach tools to it. And those tools are going to reach out and get
the external data that the model needs.
Now, I mean, be very careful. Don't do shadow AI here. This
does require a level of enterprise agreement and a
private LLM and an API key and approval from
your departments. Don't just do this on a
YOLO DIY sort of thing. There's a lot
involved here. However, you can start to build agents
with agent development toolkits, ADKs.
OpenClaw, correct. It's a personal assistant sort of thing
right now. It's probably not commercially ready, but Claude
Code, Claude Desktop, Cursor,
Antigravity, ChatGPT, Gemini, the list goes on and
on. I think just starting somewhere and maybe starting with
a roadmap and a plan of, you know, by the end of
March, let's have one agent, right? And then by the
end of April, let's see if we can scale that to 5 or 15 agents,
right? And start to see how you can augment your workforce.
And augment yourself. How could you build a little personal assistant and
what tools should it have access to, to make your life
easier and better and more productive and more fulfilled? One
of the sort of central pieces to this is creating boundaries around it, like what it can and can't do. The HR policy for the AI, I think, is really, really important here. The privacy
implications of this, I think are also really massive, right? Because you're dealing with other
people's data, sensitive information in some cases. You mentioned kind
of using a private LLM, not just sort of, hey, I got a free key on GPT. Like, that's never the way
to go, but you know, what about using the
public models versus private models? Any thoughts on sort of, you know,
roll your own, keep it in Ollama, run it local versus utilizing
like say a secure container in Azure or an API
key from one of the, one of the major brands. That, that is
an avenue. And I really suggest people look into that avenue,
particularly with, say, personal things. So
yes, Ollama, LM Studio, Microsoft Foundry Local, there
are very, very capable public models. By the end of this
call, you could install any of those three and literally have a
model locally to chat with. So they all have
REST APIs, so you can do programming against them. You can write
these agents against them. Most of the latest models that
are open source can do tool calling, which lets you do
these reasoning and action agents. But there's, you know, there may be
hardware limitations there, right? We now we're starting to talk about the
size of the model and the number of parameters and the GPU
or CPU that you have locally. That is another very
safe offline avenue is looking at
open source models. There is still, I would say, quite a big disparity in quality, though, between a private model, like an Anthropic Claude 4.6 through a private key, and an open
source model, but it's a horse race, right? Things get better, things improve.
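To make the "REST APIs you can program against" point concrete, here is a minimal sketch against Ollama's documented `/api/generate` endpoint. This assumes Ollama is running locally on its default port, and the model name is just an example.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> dict:
    # stream=False asks for one complete JSON response instead of chunks
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama model and return its reply."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Only builds the payload here; call ask(...) once Ollama is running locally.
print(build_request("llama3.2", "Summarize the health of VLAN 20"))
```

The same local endpoint is what agent frameworks point at when they swap a hosted model for an open one.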
Who knows what the next model around the corner is going to be capable
of. Yeah, I guess so, like from my perspective, like, especially in this use case
of what I'm thinking, you know, obviously if you're doing, you
know, vibe coding, then, you know, obviously Claude Code 4.6 is
the go-to, you know, but if you're just doing kind of packet
interpretation and network statistics and data collection, like I feel like
a local model with a fairly decent, off-the-shelf GPU, it doesn't need to be, you know, a 3090 or something like that. It
could be something pretty decent, but not, you know, blow their socks off.
It'll probably churn through that, albeit maybe slightly slower than a more private model from the cloud, but, you know, then it's local. It's, you know, maybe 10 to 30% slower,
but it still does the job. Free. And there's a huge upside to things
being free, especially when you're in an exploratory phase. If all this is
new to you and you just, you know, you don't want to put another thing
on your credit card, but you don't want to be held back from starting. There's
a lot to be said for Ollama and LM Studio and
these local models. There really is. I completely agree
with you. And, you know, some people really value their privacy. Some
people really don't want to be using these datasets
in particular with a public model, right? And it has all
the advantages. It has a REST API. You can program against it. MCPs work
with it. Yeah, really good suggestion. Okay, on the sort of vibe coding, vibe development thing, what are your thoughts? I mean, to lead into the question here, why do
you think like the tools vendors have been kind of slow to get
on this, this train? And granted, you know, like it's not been a long time
and we, we can respect that enterprises move at a, at a different
pace than, you know, us hobbyists do, I suppose. But I find it
interesting that some of these, these capabilities have not already shown up in
some of the, the industry tools already. Any thoughts
on the speed of development and the speed of application for some of
these capabilities? Yeah. So on the whole vibe coding, I think that we're going
to, I think we've maybe even reached a point where it's just called
coding now. Right? Like, I think everybody's doing it this way. So I
think we should just call it coding. Yeah. Because just quickly on that,
because like you see people arguing that like, oh, well, you
know, it creates trash code and, you know, I would never use that.
But then the other, a lot of decent developers will argue like, look, I've
had some people on my team that were terrible coders and it's like, trust me,
Claude is a much better developer than these people. You're not going to win everyone
over right now. Here's the one thing that I like to remind people when they
say to me, well, you're not reading all this code. Like you don't really
understand what it's doing. Well, that's a level of abstraction to me. I think
that's a positive. I think that's on the good side of vibe coding, not
the bad side of vibe coding. How many Python packages do you install where you actually go to PyPI or GitHub and look up the source code of the libraries that are included? How many times do you read the source on npm every time you npm install something? Come on, let's be honest with ourselves, right? The one caveat I would make,
John, is like, if you're going to vibe code something and release
it publicly and use it in some capacity, especially if you're going to sell it,
for the love of God, get somebody who is very qualified
to sign off on it. Especially from a security standpoint, but
also from production ready, right? Like maybe you haven't read it,
but maybe someone should, you know, I think that's very fair. Anything
you're going to charge money for, maybe, you know, you should
have some rigor around that, but why the vendors are
behind that really bothers me. It really does because, and I,
to be fair, MCP is, let's just call it, 16 months old. So it's about a year and a half old now.
But I'm just like, where are the MCPs for all these
platforms where I can just plug them in? Now, I don't know if it's because
it is too easy, because there's a lot of revenue at stake
for support and professional services. And there's a lot that goes into
being a vendor, right? And I don't know if they're
just being overprudent and cautious to not
give away the keys to their
monetary success, right? If there was suddenly an MCP for
Vendor X's tool and you don't need to buy their platform for
that tool anymore, maybe that's what's holding them back is, is sort of, they
don't want to cannibalize their
own commercial platforms by getting involved in MCP. But I don't know. I think SMTP and HTTP, these protocols that everyone can use and build upon, got us to where we are today. I think they can find a way to monetize MCP and still be innovative, keeping up with the tools available, but not losing their shirt, right? Another use case I kind
of wanted to explore with you is something that I find is particularly
problematic. I've had a few conversations with some groups recently in the past couple
of weeks about alert fatigue and the difficulty of
getting the signal-to-noise ratio right in managing complex
environments. Where like you can monitor everything, but, you know, 10,000 tickets on a board
is not helpful to anybody. Or you can tame down the noise and
potentially have something go unnoticed, and then a VP is super pissed that you guys
missed it, right? I have to imagine there is a combination
of your tool sets as well as an Agentic capability that could maybe help people
kind of run up the middle here on what is actually important in
this pile of noise, and therefore how should we tune our
alerts and our RMM for management around that? What are your thoughts on that as
sort of an exercise that people could take on? It's
read-only. You're not affecting change. You're not disrupting
flows. You're not, you know, doing the hard thing first,
but it's valuable, extremely valuable. And what a great use
for generative AI that predicts tokens and can do correlation
and root cause analysis to point it
at 10,000 tickets or 6,000 tickets or
whatever and say, boil this down into 100
tickets. Make this something that a human can consume, find
the patterns, find repeatable tickets, find, you know what I mean? Like,
what a great use case. And then from there, maybe have
the AI build on top of that. Now that we're
down to 100, recategorize them, break them down further, and
give me some suggested code on how to roll out the fix, how
to deal with this issue, right? So you start to get
into solving the problems after you sort them and boil them down into a human-consumable level. Make the tickets, make the plans, make the order of operations, right? Come
up with the test plans. What would success look like? What would
failure look like? I think that's a wonderful use case. And most people want to
start with read-only and human in the loop. And what a safe exercise
to start with is just giving it access to your ticket
farm and seeing what knowledge you can get out of that data, right?
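The "find the repeatable tickets" step doesn't even need the LLM for a first pass. A sketch of pre-grouping tickets before handing the survivors to a model might look like this; the fingerprinting rules and the sample summaries are invented for illustration.

```python
import re
from collections import Counter

def fingerprint(summary: str) -> str:
    """Collapse a ticket summary to a rough pattern: lowercase, then strip
    host/interface IDs and numbers so repeats of the same fault group together."""
    s = summary.lower()
    s = re.sub(r"\b[\w-]+\d[\w.-]*\b", "<id>", s)   # tokens containing a digit
    s = re.sub(r"\d+", "<n>", s)                    # any remaining numbers
    return re.sub(r"\s+", " ", s).strip()

tickets = [
    "BGP neighbor 10.1.1.1 down on core-rtr-01",
    "BGP neighbor 10.1.1.9 down on core-rtr-02",
    "Interface Gi0/0/3 flapping on edge-sw-114",
    "BGP neighbor 10.2.0.1 down on core-rtr-01",
]

# Count how often each pattern recurs; the top patterns are what a human
# (or an LLM doing root-cause analysis) should look at first.
patterns = Counter(fingerprint(t) for t in tickets)
for pattern, count in patterns.most_common():
    print(count, pattern)
```

From 10,000 raw tickets, a pass like this often leaves a short list of recurring patterns, which is the human-consumable pile to reason over.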
With that, I want to be cognizant of your time here. Sure. Maybe we can connect offline and have a ton more conversations, because I got through like 20% of what I wanted to ask you from my list of questions. Okay,
okay. Well, I can come back. I'd be more than happy to come back. I
love the tenor and tone of this discussion, so we should maybe wrap it up
and then I'll come back in a few weeks. Okay. So from that, what are the practical steps? You know, like we talked about, maybe roll your own, look for some internal use cases.
Are there particular resources that you would point to in the audience
of, you know, MSPs managing, you know, 30 to
130 clients? What would you say is sort of the important takeaways of like next
actions for them to look at for the raft of information and
content that you've created both in books and open source projects? And, you know, I
want to thank you for that. Like the fact that you're creating all
of this information and open sourcing, I think is incredibly valuable and
really, really admirable. Well, thank you so much. Honestly, I think, and I don't want to lose anyone who's not a programmer, but set that aside in your mind for a second. Go ahead and
download VS Code, Visual Studio Code, and it comes with
Copilot inside of it. Copilot is free for X number of calls. There's
enough there to get you started. And now you have an integrated
development environment, an IDE, which you can edit files and do different things
with. And a Copilot to help you write
code, write emails, interrogate the files that are open in the
IDE. And then once you're comfortable there, get a little bit of
comfort with that, look into how to add an
MCP to your Copilot. There's going to be a little snippet of code. You've got
to plug it into your Copilot, and now your Copilot has
access to tools. Think of the top 5 things you do every day. Are
you in Salesforce? Are you in Atlassian? Are you in Jira? What
is it that you need AI's help with? And see
if they have MCPs that you can plug in.
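For reference, the "little snippet of code" for adding an MCP server usually looks something like the JSON below. In VS Code that typically lives in a `.vscode/mcp.json` file, though the exact file name and schema can vary by Copilot version, and the server package name here is a made-up placeholder.

```json
{
  "servers": {
    "my-ticket-system": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@example/ticket-mcp-server"]
    }
  }
}
```

Once Copilot can see that server's tools, natural-language requests like "summarize today's tickets" can call them directly.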
From there, the sky's the limit. You're going to find more and more MCPs. You're
going to find day-to-day use. You're going to start getting
your $20 a month value out of your AI. And
you can also do all this with free open source models.
And reach out, feel free to connect with me on social media, or if you have any questions at all, if there are any sharp edges,
just let me know. Okay. I'll link to your LinkedIn profile in the show notes. Anywhere else that you would direct people? Maybe give your GitHub address, I'll link to that as well. Any other places
you would direct people to? I would say find my YouTube channel. I think it's really going to help people who want to get on this journey. I'm
on Twitter, X still, and LinkedIn is a really good place to find
me as well. I want to thank you for having me on the show. This
has been a lot of fun. Really appreciate your time, John. Thanks. All right. Take
care. See ya.