The n8n Masterclass

7 Steps to Automate Any Business With AI with Dave Ebbelaar

• Dylan Watkins - n8n • Episode 21



Most businesses want to automate but few know where to start.

In this episode, AI engineer and agency owner Dave Ebbelaar walks us through the exact 7-step framework his agency uses to take clients from idea to launch.

Whether you're a freelancer, agency owner, or business looking to integrate AI, this episode breaks down a repeatable process that reduces risk, sets the right expectations, and actually delivers ROI.

🔑 What you'll learn:
→ Why you should use as little AI as possible (even at an AI agency)
→ How to identify high-value, low-risk automation opportunities
→ Why mapping your current process before building anything is critical
→ The difference between deterministic software and AI systems — and why it matters
→ How to build safeguards so AI failures don't reach your customers
→ What success metrics to track after launch

⏱️ Timestamps:
0:38 - Overview: 7 Steps to Implementing AI in Business
3:01 - Step 1: Discovery
8:48 - Step 2: Prioritization & Building the Business Case
14:20 - Step 3: Map the As-Is Process
19:23 - Step 4: Map the To-Be Process
23:57 - Step 5: Prototyping & Proof of Concept
28:42 - Step 6: Adding Safeguards & Controls
32:37 - Step 7: Launching & Success Metrics
37:10 - Managing Client Expectations with AI

One tool Dave recommends for mapping workflows: Business Process Model and Notation (BPMN). If you've never used it, it's worth checking out: https://en.wikipedia.org/wiki/Business_Process_Model_and_Notation


#n8n #AIAutomation #WorkflowAutomation #AI #NoCode

Get 30% Off n8n Cloud Starter or Pro Plans!
Want to get started with n8n? Visit n8n.io/pricing and use code 2025-N8N-PODCAST-729C416E at checkout for 30% off your first month or year.

SPEAKER_01

Even though we market ourselves as an AI agency, we usually like to use as little AI as possible. The things that we're trying to automate are simply too big of a process to start with, too big of a scope. We kind of shadowed people for a day, just walking around asking people, hey, what are you doing? Just looking at their computer. You'll be surprised how many people are still doing that on a daily basis. We don't get to 100% immediately. Usually you start at 70, 80. We're going to talk about how to implement AI and automations into businesses.

SPEAKER_00

Dave, welcome to the podcast, man. I'm so excited to have you on. And I gotta ask you, so what are we learning today?

SPEAKER_01

Thanks for having me. We're going to talk about how to implement AI and automations into businesses. So it's going to be exciting.

SPEAKER_00

Yeah. And this is something that you say you have seven steps to integrating and making this possible. Is that correct?

SPEAKER_01

Yes. I'm going to walk you through the seven steps that we always run in our agency whenever we start a new project, all the way to the end where we pretty much go live. We can break it down into seven phases where every phase is really important. And often, if you miss one of those steps, it can result in either a failure of the project or problems that show up later down the line.

SPEAKER_00

And I know you were pretty early in the whole AI space, I'd say before the whole AI hype wave. Can you give me just a little bit of your background that led you to this journey?

SPEAKER_01

Yeah, sure. So I've been in the AI industry for quite some time already. I started a bachelor's in 2013, out of just curiosity, not really knowing what AI was at that point, but just followed my natural intuition. And then later, in 2018-19, I did a master's degree in it as well. Back then, AI was a lot different from what we nowadays consider AI. AI goes back even way further, to the 50s. But when I was in university, it was mostly focused on machine learning, training custom models. So out of uni, I started working as a data scientist where I did just that, training custom models on large data sets, mostly for enterprises. But as we all know, at the end of 2022, ChatGPT released, LLMs became a thing, and I pretty much went all in on what you now call AI engineering. So really working with these language models, building automations and systems for companies. That's kind of my journey. I've seen a whole bunch of progress and new things over the past decade of being in the field.

SPEAKER_00

Yeah, it's evolved quite a lot in the space. I know in, no, 2019, I took on some AI for investment. And back in the day when we were doing that, you were trying to train the AI models, and it was much more difficult to really do anything. Now there's a lot of plug and play that makes things a lot easier, which allows us to do a lot more in a lot less time. So I'm excited to jump into this with you. Let's kick things off: what is step one of this process?

SPEAKER_01

Yeah, so when we talk about the seven steps, right? Whenever we take on a client, you always start with discovery, of course. This is the obvious one; you need to start somewhere. But it's where a lot of people can also make a mistake, because often what happens when clients reach out to us is they already have a certain set of ideas: hey, we want to automate this, I've seen this online, I want to try this, let's work on this. And we found over the years that in the beginning we were really excited about all of these opportunities. So we would kind of blindly follow the clients, like, hey, it's your business, you probably know what it is that you want, and then start automating things. And while in some cases this is totally fine, what we often found out is that the things they wanted to automate were simply too big of a process to start with, too big of a scope. Or it's, for example, directly customer-facing. So, for example, putting a chatbot directly in front of customers instead of first doing it internally, or taking a process where the risk of an error slipping through has a high impact versus a safer back-end process that is a lot easier to get started with. So that's really something that we learned in the beginning: we really need to do some discovery, really need to figure out what's going to be the starting point. And pretty much what you do is you just ask a whole bunch of questions. You do an interview process. So if you're watching this and you want to sell services as well, or if you want to do this in your own company, create a list, create a backlog of all the things that you can potentially do, and rank them, essentially. Prioritize them, create a scorecard of what the impact is.
And then from there, you're already in a much better starting position to get started, really. So, yeah, that's pretty much the discovery part. That's not really the most interesting part; it will get better as we go down the line. Do you want me to continue with this? Do you have a question on that? What's going to be the flow of going through all of these seven steps?

SPEAKER_00

Sure, yeah. We'll unlock the different phases as we go through this. So, one or two questions around this initial phase. When you're going into this, it sounds like you don't want to affect the reputation of the business when you're getting started, and that putting up a front-facing automation that can impact their actual clients or customers can have a really negative impact. So you want to find something that is both low risk and high value. When you're looking for high-value, low-risk automations inside of a business, what are a couple of use cases that come to mind?

SPEAKER_01

Yeah, great question. So, like I've said, it's always easier to start with internal processes. We talked about that, so start there first. And we'll cover that later in the steps as well. But with any process or system within a business, a lot of people see the system or the process as a whole, and then they think, look, we can automate that. But usually what's way better is to figure out, look, what are all the components of the system? How is it broken up? So do a little bit of digging, and this is often trickier to get into because information and processes are often fragmented across different departments. So someone may be looking at a process in a company, like, hey, that's very cumbersome. We have information coming in here, and then someone needs to store the information there and then get it out of the system. Every company has these types of processes. So what you then ideally want to do is figure out, look, what are the processes, what are the steps that we can start to automate without using AI? This is always the starting point, because one of the most tricky things when you're working with AI, which nowadays mostly means you're working with language models, is that they're non-deterministic. That makes them very powerful, but it also makes them unpredictable in certain scenarios. So even though we market ourselves as an AI agency, we usually like to use as little AI as possible. While that sounds counterintuitive, because you want to build the cool things, whenever you can skip the AI part with a simple rule or a simple router or an if-else condition, that's always better than putting an LLM there that makes the decision for you. So I would say that's a big starting point.
Map out processes, start internally, break up the processes, and then figure out: look, this is a simple if-else, this is a simple router.
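Dave's "as little AI as possible" rule can be sketched as a router that handles the predictable cases with plain rules and only hands the leftovers to a model. This is a minimal illustration; the categories and keywords are made up for the example, not from the episode:

```python
def route_ticket(ticket: dict) -> str:
    """Handle predictable cases with deterministic rules; only the rest
    would ever need an LLM."""
    subject = ticket["subject"].lower()

    # Plain if-else rules first: cheap, predictable, easy to test.
    if "invoice" in subject:
        return "billing"
    if "password" in subject or "login" in subject:
        return "account"
    if ticket.get("order_id"):
        return "order-status"

    # Only the ambiguous leftovers would go to an AI triage step.
    return "needs-ai-triage"

print(route_ticket({"subject": "Invoice for March", "order_id": None}))  # billing
```

The point is that every ticket caught by a rule never touches a non-deterministic component, so the unpredictable part of the system shrinks to the final fallback branch.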

SPEAKER_00

So when you're doing that, at what point do you know in the discovery process that you have all the information and you're ready to move on to the next step?

SPEAKER_01

So usually it's a backlog of ideas, and you should just ask around. Depending on the size of the company, get as many people involved as you can and create a whole list of ideas. And a good starting point is not thinking from what can we automate, but what are people complaining about? What is the process where they say, I don't like to do that? Where are people making mistakes? Where are manual errors coming in? That's where you should look, and then how you wrap up phase one is just a list. It could be a document, it could be an Excel spreadsheet. But this is really exciting, because now, look, here are all the ideas that we have. Let's now go to the next phase. That's how you know when you're ready.

SPEAKER_00

Got it. So you gather, you list, then you prioritize, and then you say, here's what I'm seeing out of these 37 different ideas. Here are the top three that are gonna be low risk, high value, and I recommend one, two, or three. Which one of these makes sense to you?

SPEAKER_01

Exactly. And with that we've already moved a little bit into step two of the process, because that's more the prioritization part. So discovery and prioritization go a little bit hand in hand. But what's really important to add to that, specifically for the prioritization, is the business case, right? Of course, we're running a business; we're creating these automations in order to solve a particular problem. So, next to okay, what's the impact, we can also see, look, this process, how many people are doing this? How much time is it taking on a weekly, on a monthly basis? So we can start to map that, and we can put some internal rates on that to get an idea of what something is costing. Now, it's good to make a distinction here: there are actual man hours that go into a process, but there are also, for some processes, errors as a result of doing it manually. So you should look at both sides of the equation and then figure out, okay, what is this actually costing me? So you know if it's actually worth your time to start digging into it and putting engineers on it in order to automate it. So it's taking all of those things into the equation, also the business case, and that's how you get your prioritized list.
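The business-case math described here, man hours on one side and error fallout on the other, reduces to a simple calculation. The rates and numbers below are made-up placeholders, just to show both sides of the equation:

```python
def monthly_cost(hours_per_week: float, hourly_rate: float,
                 errors_per_month: float, cost_per_error: float) -> float:
    """Estimate what a manual process costs per month:
    labour on one side, manual-error fallout on the other."""
    labour = hours_per_week * 4.33 * hourly_rate   # ~4.33 weeks per month
    errors = errors_per_month * cost_per_error
    return labour + errors

# Hypothetical process: 10 h/week at a $50/h internal rate,
# plus 5 manual errors per month at $200 each to clean up.
print(monthly_cost(10, 50, 5, 200))  # 3165.0
```

Running this per backlog item gives a rough dollar figure to rank against, which is all the scorecard from step one really needs.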

SPEAKER_00

So with that, it sounds like there are a couple of different factors you're looking at right now. You're looking for how many hours, maybe how many errors, and also maybe how long it takes to respond, because there's not only time saved but also time to response. I could imagine these are a couple of different factors. When you talk about use case priorities, are there any other factors that you're looking at in order to help rank which priorities make sense?

SPEAKER_01

It's also complexity. So, the low-hanging fruit, right? In pretty much any business, whenever you do an audit like this or you go through a process, there are little wins. And that's also one of the things that I, for example, love about tools like n8n. Even in our agency, where mostly we build custom solutions with custom code, sometimes you have these low-hanging fruits where people are just passing information between two systems, and you just show them, look, you have a CRM. Whenever a new record comes in here, you can just tie it up to this other system that you're using, and now you don't have to move the data between two places. Now, it could be that it just saves a couple of minutes and only affects a couple of people. So from the business perspective, from the prioritization, it doesn't make a lot of sense, but it could literally take you five minutes to build, right? So that's also something you should take into consideration.

SPEAKER_00

Sure. And it sounds like, now that we're dead center in step two of this process, prioritization, that you're looking for quick wins. How can I deliver something quick to say, hey, we've already done something? We're a week into working with you, and we found this low-hanging fruit where, you know, Jenny goes into Google Sheets, she grabs the record, and then she takes that record and uploads it maybe into Notion as a back-end knowledge base. So why do we do that? Why don't we just have an automation in place that solves that problem? So, what's a quick win to already deliver and turn in so that they can get that sense of progress and moving along?

SPEAKER_01

So that's super important. And that's also one of those things where we go back to what I mentioned in the beginning with the discovery. If you go and talk to people, they usually have these big ideas, like we want to automate our entire customer care system, or we want to build this document analysis tool where every document in our company goes through it and AI will do a review. These are big problems that you can definitely tackle; you can build systems around them, but this is usually what people lead with. So that's why it's important to do the discovery, do the prioritization, and talk to people. Because from an outside perspective, if you're running a service-based business, or even if you do it inside your company, you don't really know what Jenny from HR is doing, moving data between Google Sheets, putting it into Notion and the CRM. Everyone has their own little systems. The only real way to get to that is to talk to people. We've even had projects where we shadowed people for a day, where we were just in the office, walking around asking people, hey, what are you doing? Just looking at their computer, at what they're doing. This is one of the best ways to figure out, hey, how are you really doing it like that? And then you just show them that there's a simple automation for them that they can install in five minutes, and you'll be surprised how many people are still doing that on a daily basis.

SPEAKER_00

Yeah, I could see that. And also, apologies if a Jenny from HR is listening to this.

SPEAKER_01

Jenny from HR.

SPEAKER_00

But it sounds like another thing you could do virtually is, hey, one of the things we want to do is jump into a Zoom session, or Google Hangouts, and I'm just gonna watch you work and ask you questions as you go about your day. And that part of the onboarding, auditing process is a really good fundamental deep dive, because there are a lot of unconscious patterns of behavior that humans have that they don't realize. You kind of get blind to these crutches. I know in the development world, when we were developing a lot of things in my other business back in the day, the developers would build something and not realize that there was a bug. They just worked around it and became blind to the limp that they had in the business. And the same things happen here. You just kind of deal with it and think it's business as usual. You don't realize that there's a better way until someone can see it, call it out, and say there's a solution to this that would actually take this headache off of you. Exactly. Yeah, cool. So phase three, what do we got for phase three here?

SPEAKER_01

Step number three is what we call mapping the as-is process. In software, in engineering in general, this is very common, but it's often neglected. So, what does this mean? You literally go to a whiteboard, to the drawing board, or you can do it in Figma or whatever tool, and you're going to draw out the process. This is super important, because often people have an assumption about how a process works. But if you actually ask them, okay, can you create a diagram of what the process looks like? What you'll often encounter is that there is some sort of process, but there's also a whole bunch of side pathways that people can take, edge cases, exceptions, and you generally figure out that people don't really follow the process at all or just have it in their head. And the first thing that you need to know to automate is, look, what does the current situation look like? This can be really tricky in the beginning, again, from a service delivery perspective, because you're doing the discovery, you're asking these questions, and often early on, people want some kind of proposal or an indication of, hey, look, how long is it going to take? What is it going to cost? So the discovery and these first steps that I'm going to go through, actually up until part four, are also something that you can sell or offer as an audit on its own. That's how important it is. Sometimes we go straight into the full project, where we go through the entire process and also do the build. But it's also really important to sometimes sell it as a separate phase, because there are so many unknowns. This is where you're going to figure out, ah, okay, so you said that the process would be, okay, it starts over here, then we go to that, and then we need to review that, and that's the final output.
That's the automation, that's what we need to do. But now that I'm looking at your data over here and I'm seeing all of this, I can actually see that we have this, this, this, which is actually an edge case, which would really be a new feature. And especially if you're communicating with non-technical stakeholders, it's often very hard for them to understand what a particular feature actually does. Especially if you talk about AI, they think, okay, we're going to create an AI automation. So they think if we put the AI in place and we just tell it what to do, it can handle all of the aspects of this process. But that's not the case. You really want to create separate pathways to handle all of those different data points and different outcomes correctly. So that's why you get to the drawing board and you make a clear diagram, as detailed as possible. There's tooling that you can use for this; it's called BPMN, Business Process Model and Notation. You can look that up if you're interested. That's the most technical, most correct way to do it, because there's a whole notation and there are rules for it, and you have tools for that as well. But you can also just go to a drawing board and figure it out; as long as you and your stakeholders understand it, it's good.

SPEAKER_00

So it sounds like, if there's a business that wants to automate and they reach out to a company to have these things done, one of the best things they can do is just get onto a whiteboard, whether it's a group session or individual, and say, let's map this out. And I want you to explain it as if you're gonna hire a new employee to come on and take over that business function, and walk them through it step by step, visually. And one of the things that I love about creating a visual map of what's actually going on is that there's a collaboration of understanding, because we're visual by nature. Being able to track out the process of, okay, you're gonna open up your computer, you're gonna go into Excel, you're gonna enter these data points there, and so forth and so on, and ask, are these the actual steps that you do? Because almost always, this is a series of iterations. They go, well, kind of, but then what happens is I also need to download it and save it, and then I need to upload it to this other file format, and I need to re-upload it because, for some reason, the knowledge base doesn't take that file format, or whatever the situation might be. So it's being able to iterate on that, because the clearer that map is, being able to explain it to someone who has no idea how your business works, the easier it's gonna be to automate that process, whether it's AI or not. Totally.

SPEAKER_01

Yeah.

SPEAKER_00

Amazing. Okay, so we're clear on this one. I didn't know. Can you say that again? It was the business markup. What was that again?

SPEAKER_01

So it's Business Process Model and Notation. BPMN. That's the abbreviation for it.

SPEAKER_00

Okay, all right. I'm gonna look that up. Actually, I was not familiar with that myself. Appreciate that. Beautiful. All right, so once they have this map, is there anything else you want to say about it before we move on to the next phase?

SPEAKER_01

Um, I think we've covered the most important things. Ask a lot of questions, make sure you capture the edge cases, make sure you start from a common starting point, so that both you, as the technical person involved, and the other stakeholders understand it. And then the next part of the process, which goes hand in hand with step number three, is mapping the to-be process. So you map the to-be process, which is, okay, what should this process look like? And that usually means creating a copy of the current map that you've just drawn, and you're going to change the blocks, you're going to change the lines. Instead of maybe three blocks that you have in there (let's take the example of someone pulling data from a spreadsheet, then an arrow to Notion, and then a manual copy-pasting process), you remove those three steps, and there's one block, and you say, here's an automation that pulls the data from this and then sends it to Notion via the API. So now you can start to create these current situation, desired situation views. Actually, if you start to use more and more AI components in it, this is also sometimes referred to as a cognitive architecture: really the architecture of the decision-making logic and processes that the AI needs to run at certain points. And again, you can go very detailed with it. I have some people in my community who make the most crazy, insane diagrams, really nitty-gritty, that you can almost feed into AI right now and it will build the entire thing. That's how detailed they are. But again, the most important thing to understand here is that both you and the other stakeholders involved understand, okay, this is what it looks like right now, and this is how we want to automate it. And instead of doing the entire thing, we start with these building blocks over here.
Because if we can just swap these out and create an automation around this, we can test the system, we can stabilize it, and then maybe later we can do the other few components of the system. So this is also where you can start to reprioritize, now that you get a better understanding of the system as a whole. So it's just a lot of drawing, usually on a digital whiteboard.
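Dave's spreadsheet-to-Notion example, three manual steps collapsed into one automated block, might look roughly like this in code. The CSV columns and Notion property names here are hypothetical, and a real integration would still need an authenticated API call to create each page:

```python
import csv
import io

def rows_to_notion_payloads(csv_text: str, database_id: str) -> list[dict]:
    """Turn spreadsheet rows into Notion-style page payloads, replacing
    the manual download/copy-paste/re-upload loop with one block."""
    payloads = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        payloads.append({
            "parent": {"database_id": database_id},
            "properties": {
                "Name": {"title": [{"text": {"content": row["name"]}}]},
                "Email": {"email": row["email"]},
            },
        })
    return payloads

# Hypothetical export: one row Jenny would otherwise move by hand.
sample = "name,email\nJenny,jenny@example.com\n"
payloads = rows_to_notion_payloads(sample, "db-123")
print(len(payloads))  # 1
```

In the to-be diagram, this whole function is the single new block that replaces the three manual ones.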

SPEAKER_00

Yeah, and I can see that process repeating again and again. So if you have the top 10 items, you rank them out, you start with number one, and you say, okay, number one, we want to automate this. Okay, well then, what's mapping the as-is versus mapping the desired outcome? You overlay that, and then you take certain sections that you're gonna chunk out. So inside of that process, let's say there are three areas that could be automated. Then you can say, cool, for this next phase we're gonna automate this step one or this automation, and as we get these done, we're gonna deliver them to you, either for testing or to roll into some back end of a staging software somewhere so they can apply them. And then it makes a real clear process, because you could say, okay, here are the top 10 automations to automate your entire company. This will probably take 12 months. We're going to start with number one. And we plan, by six months in, to finish the top three that will deliver some sort of ROI on the backside, which we've already defined inside of the discovery process, because we already know why they're ranked so high. So we can almost aggregate the total value that they're going to be getting: time saved, money earned, errors fixed. And say, cool, by six months in, you'll reduce your error rate by 92%, you'll save 300 man hours, and, you know, you'll be able to take a two-hour siesta in the afternoon, something like that.

SPEAKER_01

I mean, that's the best case scenario, right?

SPEAKER_00

The the hopes and dreams of AI is the more automation.

SPEAKER_01

Hopes and dreams. Yeah, exactly. And depending on the complexity of the project and the automation, this is also where you can create all the documentation, right? So one thing is getting it out on the whiteboard, but you can also create the underlying documents, product requirements documents (PRDs), splitting up all the requirements for the individual blocks that you've created. You could even go as far as creating user stories around this. Again, this all depends on, hey, are you working with enterprise, big teams, or are you just working with an SME? Are you working with the founder directly? Do they really want to implement this tomorrow, and can you just experiment on the go? Depending on that, sometimes you can just create a drawing and start building, start tinkering. Sometimes you need to be more formal and create the proper documentation underneath it as well.

SPEAKER_00

Yeah, and that's probably why it's important to get the quick wins as well, because you don't need a whole bunch of user stories and user journeys for those. If you can knock out a quick win, then while that's going on in the background, you can have another team member start work on user journeys for maybe some of the bigger, higher-risk items or ones that are more complex.

SPEAKER_01

Exactly.

unknown

Cool.

SPEAKER_00

All right, so we are we good here on the on this step? What's what's the next one here?

SPEAKER_01

We're good. So now we've come to step number five. And this is actually where it gets really fun, because this is prototyping, this is building, and this is usually where most people start. What we covered up until this point, for a lot of people, is kind of the boring stuff, right? Especially as engineers, we're builders. We don't like to do interviews, we don't like to write documents, we just want to build stuff. But it's very tricky, and you definitely don't want to neglect this. Whether you do it for yourself or for another business, it doesn't really matter; doing the foundations properly really goes a long way. Because now we actually know, with a high degree of certainty, that the things we're going to build are actually going to move the needle. And what's really important in the prototyping phase, and this gets more important the more AI you put into the final solution, is that for a lot of AI solutions, you first need to prove it's actually good enough, whether you can actually do it. Now, if you compare this with more traditional software, let's say someone asks you, hey, can you build me a website? If you know how to build websites, you can say with 100% certainty, I can build you a website. And if you click on this button, you will go to the next page. If you do this, that will happen. It's deterministic software. We can build that; it's good. When you're talking about AI systems (we've done a lot of work in customer care, for example), you can't say in the beginning, look, we could just throw all the tickets at an LLM, we have the knowledge base, and we just let the AI handle everything. We don't really know for sure, because what types of questions are people asking? How complex are the questions? Is the AI actually capable enough of understanding everything?
And usually the AI is capable enough, but is the engineer building the system capable enough of engineering the context in such a way that the AI can handle it? This is where you need to go through a proof of concept phase. Usually that involves making a small, fast build where you put limited data through, where you get to a benchmark level that you can show to the customer: hey, look, this is the starting point. It's not perfect yet, and probably not all of the cases that you want to cover; again, especially if you introduce more and more AI, you probably have edge cases that you need to monitor. Not everything is going to be 100% correct out of the box. That's one of the trickiest things when building AI systems: managing those expectations. Some customers come in who have been building systems and automating with agencies for years already, but mostly deterministic software. Now they work with an AI agency and they expect that same level of rigor and reliability out of the box. And then we come back to discovery and prioritization, picking the cases where you have a higher likelihood of making sure that everything is correct. But that's the whole goal of the prototype: make sure that you can actually do it. Sometimes it's really simple; sometimes the prototype is almost already the final build. If we're talking about taking data from a spreadsheet, putting it into the CRM, making a summary, and adding that to another system, by now we know we can do that. We maybe need to tweak the prompt a little bit, but we can do that. If we're talking about a whole document analysis process where we need to check documents for a specific set of domain-specific rules against some kind of knowledge base, we need to test it first. So that's prototyping. It's building, it's proving, and then showcasing what's possible.
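The proof-of-concept benchmark described here, running a limited labelled sample through the system and measuring how often it gets the right answer, reduces to a loop like this. The `classify` function is a stand-in stub for whatever the prototype actually does (a real build would call a model there), and the test cases are invented:

```python
def classify(question: str) -> str:
    """Stand-in for the AI step; a real prototype would call a model here."""
    return "shipping" if "order" in question.lower() else "other"

def benchmark(labelled_cases: list[tuple[str, str]]) -> float:
    """Fraction of a labelled sample the prototype answers correctly,
    i.e. the 70-80% starting point you show the client."""
    correct = sum(1 for q, expected in labelled_cases if classify(q) == expected)
    return correct / len(labelled_cases)

cases = [
    ("Where is my order?", "shipping"),
    ("How do I reset my password?", "other"),
    ("My order is delayed", "shipping"),
]
print(benchmark(cases))  # 1.0
```

The number this produces is the honest baseline to put in front of the customer, rather than a promise of out-of-the-box reliability.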

SPEAKER_00

Yeah, and that's why I think it's really important, when you go back to the discovery process, to ask questions especially around errors. So we say: what errors typically happen when you put data into this file or format, or whatever's going on? If you can gather a number of different error use cases, then, whether you're using the AI evaluations we have inside of n8n or something else, you can run a test data set through with the typical errors along with a couple of more common use cases, and see how well the AI does with it. If we can identify those typical error patterns and run them through the AI model, we'll have more confidence, because we either need to fix the prompt, or fix the language model, or whatever we're going to be doing to get it performing correctly. That's why it's important that the foundation is laid correctly: a good procedure to get you the output that you want. So, 100%, makes a ton of sense on prototyping for step five. What's step six?
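The evaluation loop described here, running a test data set of known error cases plus common cases through the model and measuring how many come back right, can be sketched in a few lines of Python. Everything below is illustrative: `classify_ticket` is a hypothetical, rule-based stand-in for whatever LLM call or n8n evaluation you actually use, and the test cases are made up.

```python
# Minimal evaluation harness (a sketch, not the n8n implementation):
# run known error cases plus common cases through a classifier and
# measure how often it produces the expected label.

def classify_ticket(text: str) -> str:
    # Hypothetical placeholder classifier; swap in a real model call.
    if "order" in text.lower() or "package" in text.lower():
        return "where_is_my_order"
    return "other"

# Test data set gathered in discovery: typical errors + common cases.
test_cases = [
    {"text": "Where is my order? It's been two weeks.", "expected": "where_is_my_order"},
    {"text": "My package never arrived.", "expected": "where_is_my_order"},
    {"text": "Can I get a discount on my next purchase?", "expected": "other"},
]

correct = sum(1 for c in test_cases if classify_ticket(c["text"]) == c["expected"])
accuracy = correct / len(test_cases)
print(f"accuracy: {accuracy:.0%}")  # the benchmark you show the client
```

The point is not the toy classifier but the shape of the loop: a fixed labeled set, one accuracy number, and a place to rerun it after every prompt or model change.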

SPEAKER_01

Step six, that's adding your safeguards. This follows directly from the proof-of-concept phase, where you build something and now you're going to put data through it, exactly like you just explained. This is super important, and again, sometimes this is really tricky for clients to understand, so it's also an iterative and collaborative process. As an outside service provider, there are so many things that you know and so many things that you can ask, but you often don't have access to all of the data, right? So you can ask for a sample data set: what are typical documents you process, what are typical inquiries you get that need to be answered? But you always start with a subset; that's what you build the prototype on. Then it's about pulling more and more data through it and understanding where it goes wrong. Often this is also where you need to have the client in the loop, because they know what good looks like. For you as an outsider, that's not as easy to validate. And then you want to identify the common failure modes and build the safeguards around them. That's the whole purpose of step number six. An example: we've done a lot of work in customer care, automating support tickets. What we always do is take the data from, let's say, the last month or 60 days, and we categorize the tickets, with AI as well. In e-commerce, for example, 80% of the tickets are: where's my order? Are there any delays? Why is my package not here? That's a big chunk of the work, which can then be automated by integrating with a shipping API to track the status and having a knowledge base behind that with some rules.
Now, if we can correctly identify those types of questions programmatically, and AI is really good at that, just classification, what category of ticket is this, we can then start to build an automation with a certain pathway, where we say: look, everything that comes in, we escalate to humans, but if it's a where's-my-order question, we automate it via AI. That's a very simple example, but it's something you ideally want to apply to all problems. Identify the 80/20: what's the big chunk of the work that I can already safely automate, while building gates and safeguards for the other types of data that flow through the system? What the safeguards mean depends on the process, but usually it's escalating to humans for them to review, so letting a human handle it, or solving it the way the process is currently set up. So that's step number six: the safeguarding, controls, monitoring, and all of the fallbacks that you need in your system.
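The classify-then-gate pattern Dave describes, automate the proven-safe categories and escalate everything else, boils down to a whitelist with a safe default. The category names and route labels below are illustrative assumptions, not a real API:

```python
# A sketch of the safeguard gate: only categories we've validated go
# down the AI path; the default for anything unknown is a human.
AUTOMATABLE = {"where_is_my_order"}  # proven safe in the proof of concept

def route_ticket(category: str) -> str:
    if category in AUTOMATABLE:
        return "ai_automation"    # e.g. shipping-API lookup + templated reply
    return "human_escalation"     # unknown or high-risk: review queue

print(route_ticket("where_is_my_order"))  # ai_automation
print(route_ticket("refund_request"))     # human_escalation
```

The important design choice is that escalation is the default: a new or misclassified category can never silently reach the customer through the AI path.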

SPEAKER_00

Yeah, I can see that, because I know in n8n we have a classification node that allows us to classify things, and then we can also have a human-in-the-loop step that says: okay, if it falls under that other 20%, can-I-get-a-discount or I'm-really-mad or some other issue that could be high risk, then automatically send a human-in-the-loop verification or notification. That allows the human to say: okay, I see this, let me handle it with a delicate touch. Which is also great, because on top of that, you can take the data coming through the human-in-the-loop node and use it to come back and improve the AI over time: hey, we've identified this edge case, or we've put guardrails on it. Once humans have answered this stuff enough times, we'll be able to find patterns and roll that back into the AI classification.

SPEAKER_01

Exactly. That's the pattern. Those are very useful nodes in n8n for this, too. We follow exactly the same principles.

SPEAKER_00

Beautiful. Okay, we've got this. We've put guardrails on. Let's bring it over to the next step here.

SPEAKER_01

We're getting there. Final step, number seven: launching. This is where we can actually go live with the solution. Very important: we created a prototype, we tested it, okay, look, we can actually do this. The client says, yes, this is good enough. Maybe there was some iteration in there, and we put proper safeguards on it, because we know it's not going to get everything right out of the box. Again, if it's a simple, deterministic process, that's often possible, but for almost all the projects we work on, you want those safeguards, and you want to be able to safely delegate or escalate cases so that you can go live. Launching is then all about: how can I start adding value as quickly as possible? Because that's sometimes one of the trickier things. We've gone through the entire process, and depending on the size, the team, what you're doing, up until this point no value has been added, but a lot of work has been done. Invoices may have already been sent, people have spent time on it, but there's no actual value just yet. The value comes, of course, after launching, when you put the system into production, into operations, and it's saving time, generating more revenue, reducing manual errors. That's where we can go back to our ROI calculation and see: hey, this is great. Now, the most important thing during launch is getting clear on the success metrics. How do we evaluate the system? For example, with a customer support system, we could monitor the accuracy rate: of all the tickets that the AI touched and handled, how many were actually handled correctly? Again, we need the client in the loop there. They need to monitor this in the beginning, and they need to say: look, here are 10 tickets that the AI touched and automated. These nine, totally great.
This last one, not really good; this is ideally not something we would have shared with the customer. Okay, that's a lesson learned. Now we feed that back into the system, but you want a way to monitor that, so you can see what the current KPI or benchmark is and continuously improve the system over time. That's for the automation that's in place right now; in parallel, you can start to tackle some of those trickier problems that we created the safeguards for. For example, you mentioned asking for a discount. That could be a subprocess on its own, where we say: look, we're now going to tackle this. There are some prompts in there, we have some data, let's figure this out. Then you go into solving those sequentially, really, but you need to know the success metrics after the launch in order to build from there. And that is pretty much the first process, the seven steps to get to launch. Post-delivery optimization, that's a whole different talk we can get into: monitoring, maintenance, of course. But this is really how it gets started: how you can go from an idea, I want to automate something, to launching and actually generating the value.
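The accuracy-rate metric described here, the client reviewing a sample of AI-handled tickets and flagging the failures, reduces to a simple calculation once the reviews are recorded. The data shape below is a hypothetical example (matching the 9-out-of-10 case above), not a real system's schema:

```python
from collections import Counter

# Hypothetical client-review records: for each AI-handled ticket, the
# assigned category and whether the client approved the AI's handling.
# Here: 9 of 10 approved, as in the example above.
reviews = (
    [{"category": "where_is_my_order", "approved": True}] * 9
    + [{"category": "where_is_my_order", "approved": False}]
)

def accuracy_by_category(reviews):
    """Accuracy rate per ticket category: approved / total handled."""
    totals, approved = Counter(), Counter()
    for r in reviews:
        totals[r["category"]] += 1
        approved[r["category"]] += r["approved"]
    return {cat: approved[cat] / totals[cat] for cat in totals}

print(accuracy_by_category(reviews))  # {'where_is_my_order': 0.9}
```

Tracking the number per category rather than overall is what lets you decide, case by case, which safeguarded pathways are ready to be promoted to full automation.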

SPEAKER_00

I can absolutely see that. On success metrics, it ties back to the beginning, where we talked about mapping the as-is versus the desired outcome with automation. When you get to the desired outcome, ask that question right there at the start: if we're going to tackle this first, what are some of the success metrics? How do we know this automation is successful? That way, when we wrap back around to the launch at the end, we already know what we're looking toward. And then maybe set some expectations going into it: great, when we launch, we're going to have this run on a set of 10, 100, whatever it might be. We're going to save that into some sort of knowledge base or a spreadsheet, and then we're going to have you go through it and confirm what we have. Then we can tie it back up and say: okay, we got 80 or 90 correct, this has saved X amount of time, this is the value delivered, and now we're able to tackle that last 20%. So then we can say: cool, is this good enough as-is with this 90% confidence? Or do you want to iterate on it, or move on to another automation? So you can either go deeper or go wide. That sounds like a possible path. Exactly, exactly. Amazing, amazing. Dave, we've gone through the seven steps. Is there anything else you'd like to let people know about?

SPEAKER_01

Yeah, what I want to stress again, especially from a service-provider perspective, if you're a freelancer or running your own AI agency, is the importance of letting your stakeholders know that if we want to do things with AI, if we want to automate, we don't get to 100% immediately. Usually you start at 70 or 80, and they need to be actively involved and allocate time to review the system and give feedback. What happens a lot of the time is a mismatch in expectations, where they think: I'm going to an external service provider; I don't have time to do this, I don't know how to do this. Which is fair, of course; that's why we step in as an agency or a freelancer. But then they think they can just say: hey, here is everything, go automate everything, ping me when it's done, and we start running. For deterministic systems, yes, that's possible. The more AI you put into the mix, the harder it gets, and you need to be aware of that. You also need to consider it in your timelines and in your costs: not only the upfront cost of paying the agency or the service provider, but also allocating time and resources internally from the people who need to stay in the loop, act as that human in the loop, and feed data and feedback back to the developers who can then implement it. That's the point where you get most of the issues and mismatched expectations, I've noticed over the past years of really working on these projects. So now we're very upfront about that whenever we start an engagement.

SPEAKER_00

Yeah, and I can imagine front-loading that in the very beginning, especially around the desired outcome of one of the automations, just to set expectations: part of this is going to be a collaborative process. We will do all of the technical heavy lifting; we're going to get it built out based on what you want and what you need. But we need your input and your guidance, so this will be a time investment, and it will take iteration to get it up to the standards that you want. And at the same time, just know that every hour invested into showing up to these meetings means that, once we nail this thing down, you'll be able to save time from here on out. So I'd say set a bit of the expectations, but also let them know it's not just about the grind; it's about the fact that you're investing in automating your business, which will free up time in the long run.

SPEAKER_01

Exactly. Exactly.

SPEAKER_00

Dave, this has been great. We've gone through the seven steps of integrating AI and automations into businesses. If people want to find more of you, how do they do that?

SPEAKER_01

Yeah, the best place to start is on YouTube. You can just go there, search for my name, Dave Ebbelaar, and you'll find my channel. I make technical videos, a lot of Python-related tutorials and courses, really about building custom AI solutions. If you've never written a single line of code before, I also have a complete five-hour course on YouTube, completely free, on how to go from having never written a single line of code to building pretty cool things with Python. So if you're into that, you can explore it. If you're already a bit more technical, there's a whole lot of AI engineering videos where I go deeper into some of the things I explained in this podcast.

SPEAKER_00

Fantastic. Dave, have a beautiful and blessed day, my friend. Much love, and I'll see you on the other side. Bye now.

SPEAKER_01

Thanks for having me, Dylan. See ya.