The n8n Masterclass

Building an Open Claw Clone in n8n | Full Walkthrough and Template

• Dylan Watkins - n8n


🎁  Download n8nClaw Template Here - https://go.n8n.io/n8nClaw

Open Claw has taken the AI world by storm with its 24/7 autonomous assistant capabilities, but it's a black box. In this episode, I sit down with Shabbir, an n8n community member who rebuilt Open Claw's core functionality entirely inside n8n—where you can see, customize, and control every single component.

What You'll Learn:
• Multi-channel messaging (Telegram, WhatsApp, Slack) with persistent conversations
• Tiered AI agent system (Haiku → Sonnet → Opus) for cost and speed optimization
• Autonomous task management with scheduled heartbeat triggers
• Document management with Google Drive integration
• Email manager for reading, processing, and sending emails
• Vector memory system for long-term context retention
• Research agents with web search and Wikipedia integration
• Live demos showing real-world usage

Key Features:
✅ Open source and fully customizable
✅ Visual workflow you can actually understand
✅ Persistent memory across multiple platforms
✅ Autonomous execution without constant supervision
✅ Domain-specific tool extensions
✅ Complete template available for download

Whether you're curious about Open Claw or want to build your own AI assistant with full control, this walkthrough shows you exactly how it's done. We cover the architecture, test it live, and discuss future evolution ideas including recursive agent triggers and local file management.

Shabbir's YouTube Channel: https://www.youtube.com/@ShabNoorAI

Timestamps
00:00 - Introduction to the n8n Open Claw Clone
01:19 - Why Open Claw is Popular & What We're Building
02:30 - Live Demo Setup & Overview
03:14 - Multi-Channel Messaging Explained
04:47 - Voice Notes, PDFs & Image Analysis
07:43 - Persistent Memory System
09:31 - Tiered AI Agents (Haiku → Sonnet → Opus)
11:30 - Task Management & Autonomous Execution
13:35 - Heartbeat Scheduling & Proactive Check-ins
14:38 - Google Drive Document Manager
18:46 - Email Manager Integration
20:29 - Research Agents with Wikipedia & Web Search
22:50 - Vector Store for Long-Term Memory
25:16 - Live Test: Sending Checklist via Email
26:05 - Future Evolution Ideas & Customization
27:07 - Recursive Agent Triggers Discussion
28:58 - SSH & Local File Writing Capabilities
29:33 - Template Availability & Final Insights

Get 30% Off n8n Cloud Starter or Pro Plans!
Want to get started with n8n? Visit n8n.io/pricing and use code 2025-N8N-PODCAST-729C416E at checkout for 30% off your first month or year.

SPEAKER_01

Open Claw has been blowing up. Everyone's talking about having their own 24/7 AI assistant, one that remembers everything, works across Telegram, WhatsApp, and Slack, and can actually execute tasks autonomously. But here's the thing: it's a black box. You install it, you hope it works. And if you want to customize it, you're digging through the code. But what if you could rebuild the core of Open Claw inside of n8n? Where you can see every node, every connection, every decision the agent makes. You can swap out models, add in your own tools, plug in your own integrations, and actually understand what's happening under the hood. That's exactly what an n8n community member did. Today I'm joined by Shabbir, who built a lightweight Open Claw clone entirely inside of n8n. Multi-channel messaging, persistent memory, tiered AI agents that scale from fast and cheap to full reasoning power depending on the task. Document management, email, research agents, vector memory, the whole stack. And I've got it up and running on my side, hooked it to Telegram, and we're gonna walk through the entire thing live. If you've been curious about Open Claw but wanted something you can actually see, customize, and own, this is it. And the template is gonna be made available for you to download and set up yourself. Let's dive into it, shall we? Shabbir, welcome to the podcast. Super excited to talk to you. So what are we building today?

SPEAKER_00

So we are building a, and first off, thank you so much for having me on. It's an honor. We are building a lightweight clone of the viral Open Claw in n8n.

SPEAKER_01

Amazing. Really excited to dive into this. I know you've put a lot of work into it. I have it built on my side from our talk yesterday; went ahead, spun it up, got it connected, and I have it hooked up to the system. So let's go into it, have you walk me through it, we'll do some demos with it, show some use cases, and we'll talk about where it's at and where it could possibly go. Sure, sounds exciting. Amazing. Let's dive into it. So I'm gonna share my screen and we are gonna open this up. Three, two, one. Here we go. All right. So on the left-hand side, I have Telegram, which is connected over here to the workflow that you've built out. And I got this up and running. It maybe took me about, I'd say, 30 minutes to get this up and rolling with everything in place. But let's look at this. First we have the entire hierarchy here of the overall workflow. But let's start front to back, and walk me through how a bit of it works inside of here. So let's start here and we will go through each of the sections.

SPEAKER_00

For sure. So Open Claw is, well, Open Claw for three particular reasons. Reason number one is that you can talk to it from multiple channels and it always picks up where you left off. Reason number two is that it can execute tasks with autonomy. And number three is that it has a very persistent memory; it remembers basically everything about you. So I got to thinking about that and I figured, all right, most of these things should be very doable in n8n, and that's what we're looking at up here. The user profile lookup is basically where that persistence and initial knowledge comes in. It's basically just an n8n data table that's initialized the first time that you set this up. And you probably did that when you fired it up for the first time, where it asks you a few questions: it asks you for a username, it asks you a little bit about yourself and what you're expecting from the bot. And that's all configurable through the system prompt. So every time a trigger comes in, it pulls this data from the data table, and that is fed into the system prompt. So in any interaction you have with that agent, it always has that bit of background about you. The data table fields are also living, so to speak, meaning the system prompt in the main agent has instructions that if it finds a new relevant detail that it feels should be added permanently, it can actually go ahead and update those fields and make those changes.

SPEAKER_01

Yeah. So inside of the data table rows, we have a data table for the initialization, and we have the user's information. So it can add more information about the user inside of there, or anything else relevant that it finds.

SPEAKER_02

Correct.

SPEAKER_01

And then it looks like over here with these Set nodes, you're standardizing the inputs. And you talked about this: the input has the user message, system prompt details, and last channel.

SPEAKER_00

Yeah. So because we want to be able to handle inputs from multiple sources, we're using the Set Fields nodes to standardize the variable names right before it goes into the AI agent. So you can just have a single variable name, and depending on whichever Set Fields node fires, the data is always going to go through into the node. So we have three variables going in. The user message is the request that you're making, your chat input or whatever that may be. The system prompt details is what it pulls from the data tables that we just spoke about; that's the background of who the agent is and who you are. And the last channel is actually a static field that's saved into the data table, just for the agent to remember that this is the channel that it has to reply to. So in this demo, I've got WhatsApp and Telegram set up, but hypothetically, you could add Slack, you could add Microsoft Teams, any number of triggers, right? You just have to copy-paste and update that field accordingly. So hypothetically, you could have five different platforms, and you could just switch from A to B to C to D to E and keep the conversation going across multiple platforms.
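
The standardization Shabbir describes can be sketched as a small mapping step, the kind of thing a Set or Code node does. The payload shapes and field names here are illustrative assumptions, not the template's exact schema:

```javascript
// Sketch of the Set-node standardization: whichever trigger fires, the
// incoming payload is mapped to the same three fields the agent expects.
function standardizeInput(channel, payload, profile) {
  // One extractor per channel; adding Slack or Teams would be one more
  // entry here plus a trigger node in the workflow.
  const extractors = {
    telegram: (p) => p.message.text,
    whatsapp: (p) => p.body,
  };
  if (!extractors[channel]) throw new Error(`Unknown channel: ${channel}`);
  return {
    user_message: extractors[channel](payload), // the user's request
    system_prompt_details: profile,             // pulled from the n8n data table
    last_channel: channel,                      // so the agent replies on the same platform
  };
}
```

Because every branch emits the same shape, the AI Agent node downstream only ever references one set of variable names.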

unknown

Yeah.

SPEAKER_01

And I know with that, because I was setting this up, that you needed to bring in a community node that I believe was called Evolution API or something like that, which allows you to access WhatsApp, because it's a community node. Yeah. Okay. So if you're setting this up, you need to go into the community nodes and add that node inside of there.

SPEAKER_00

Yeah, unless you have the WhatsApp Business Cloud set up, but it's kind of a pain to set that up. So if you don't get verified, this is kind of like a, quote unquote, unofficial way to get going with WhatsApp.

SPEAKER_01

Got it. So inside of here, you have all the data coming in through the different channels, right? And one other thing we did inside of here that I thought was really interesting is over here with these different types of analysis that you have through, I believe this is Gemini. Can you talk to me a bit about what's going on inside of here?

SPEAKER_00

Yeah, so Open Claw does support multiple types of input. You can send it voice notes, you can send it PDF files, you can send it images. So as a proof of concept, I set that up in Telegram, where you can send a voice note. If it's a voice note, the switch node catches that, downloads the file, and sends it to Gemini for analysis. Same thing for images, and the same thing for PDFs. I just prefer using Gemini because the Gemini node handles all of this very gracefully. A single node just takes the entire input and gives you an output right away, so you don't have to fumble with multiple nodes.
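
The switch logic can be sketched as a routing function over the incoming Telegram message. The field names follow the Telegram Bot API message object; the branch labels are illustrative, not the template's actual node names:

```javascript
// Sketch of the Switch-node branching for Telegram media: each message
// is routed to a download + Gemini-analysis branch based on its shape.
function routeMedia(message) {
  if (message.voice) return "voice_analysis";   // Gemini audio transcription/analysis
  if (message.photo) return "image_analysis";   // Gemini vision
  if (message.document && message.document.mime_type === "application/pdf")
    return "pdf_analysis";                      // Gemini document analysis
  return "text";                                // plain messages skip the media branch
}
```

A video branch, as discussed, would just be one more check (`message.video`) pointing at another Gemini analysis node.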

SPEAKER_01

I believe Gemini also analyzes videos, I'm pretty sure. So if you want to send in a video message as well, we could always do that as an upgrade to the system later on.

SPEAKER_00

Yes, for sure.

SPEAKER_01

All right, and so all of that's coming through. It's updating it here into these rows. What is this getting put into here? This is getting put into the initial table.

unknown

All right.

SPEAKER_00

This is just grabbing that information, the context.

SPEAKER_01

Got it. And then it's setting the information and passing this over to our n8n Claw right here. Yes. Okay. So then let's talk a little bit about this inside of here. Maybe we'll look at the core agent in terms of what we have in place. So right here is the prompt for the n8n Claw. Talk us through it.

SPEAKER_00

Yeah. So the top part of the prompt is just initializing the identity of the agent. We're telling it: okay, you are n8n Claw, you're advanced, you're proactive, you have multiple tools. And we're passing in that initial information from what we got from the data tables, which is the identity: the agent's identity and the basic information about the user. And right up top, we're also telling the agent that the user field is a living document, meaning throughout the course of your interaction with n8n Claw, it's possible that your strategy changes, or your priorities change, or something about what you want from the agent is going to change. So it's a living document, meaning if n8n Claw feels that, okay, the current user information is now outdated and what the user is asking from me is a little different, it can actually go ahead and update the user field with that new information so it always stays current.

unknown

Got it.

SPEAKER_01

Yes. Persistent. All right, fantastic. And inside of here, you have access, you're saying, to email, you're doing research. I do like what you did with the research agents, which I want to dive into, with these multiple agents that you have the ability to spin up based upon level of difficulty. I think that's a really good use case, and really good for maximizing token usage: as a task gets to a harder level, it can spin up a more powerful LLM. Yes. All right, and then the documents agent, which I think was great and really useful. This is one of my favorite things that you put in place, this documents agent that you put inside here. So let's look at this real quick. We go here. And so this is the core section inside of here. Going down here to the bottom, you have these three agents in place. All right. Do you want to talk me through these ones? This is what we were just talking about, the level of difficulty. So I'm gonna open up the first one here.

SPEAKER_00

So this is basically a blank slate where the system prompt and the user prompt are both defined by the orchestrator. The only difference is that the three worker agents have three different models connected to them. The first one has, I believe, Claude Haiku, which is their fastest but a slightly lower-intensity model. The second one has Sonnet, and the third one has Claude Opus. And really, when you're customizing this, you can use your own preferred LLM providers. The only thing I would suggest is to progress from faster, less capable models to more intense models from agent one up to agent three. The thought process behind this was that not all tasks require that level of thinking, right? So it's to save on tokens and, more importantly, to save on time, because the lower-level models are typically much faster and can turn around tasks a lot quicker. So for both time and money savings, the agent can split up the delegation.
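
The tiered delegation above can be sketched as a small routing table: the orchestrator rates a task's difficulty and hands it to the cheapest worker that can handle it. The 1-to-3 scale and the model labels here are illustrative assumptions, not the template's actual configuration:

```javascript
// Workers ordered cheapest/fastest first, as Shabbir suggests.
const WORKERS = [
  { maxDifficulty: 1, model: "claude-haiku" },  // fastest, cheapest
  { maxDifficulty: 2, model: "claude-sonnet" }, // balanced
  { maxDifficulty: 3, model: "claude-opus" },   // slowest, most capable
];

// Return the first (cheapest) worker whose ceiling covers the task.
function pickWorker(difficulty) {
  const worker = WORKERS.find((w) => difficulty <= w.maxDifficulty);
  return (worker ?? WORKERS[WORKERS.length - 1]).model; // clamp to the top model
}
```

Swapping providers is just a matter of editing the table, which mirrors how the three worker agents each have a different model node attached.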

SPEAKER_01

Let's show a little test here. So if I go inside of here: could you create a task about the podcast pipeline to help me with show notes? I'm pretty sure it's gonna hit this first one here, because it seems to be a relatively quick use case. And what I liked about this one was the speed with which it came back. Look: task created, I've added show notes automation to your task list. Would you like me to start working on this now, or can I help you with something else? And it starts to go through and look at what I asked for, which was to help me with my podcast. So this was pretty quick, and I'm pretty sure it hit the Haiku for the response. But it went inside there, and I'm pretty sure what it did is it added it inside of here. Is this correct?

SPEAKER_00

Yes. And that's really half of the brain behind n8n Claw, where it can keep track of tasks, and subtasks within those, in order to remember what it is that you've asked it to do and how much of that it's completed. So if we go back to the triggers on the left side, there's a schedule trigger that basically just has a hard-coded prompt in it. And that hard-coded prompt is: check the tasks, see what's still pending, and start working on it.
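
The heartbeat can be sketched as the payload that a schedule trigger injects on each tick, as if the user had sent it. The exact wording and field names here are assumptions, not the template's actual prompt:

```javascript
// Sketch of the heartbeat: each schedule tick feeds the agent a fixed
// prompt so it reviews its task table and resumes anything pending.
function heartbeatPrompt() {
  return {
    user_message:
      "Heartbeat check-in: look through the task list, find any task " +
      "that is still pending or in progress, and continue working on it. " +
      "Only message me if something needs my input.",
    source: "schedule_trigger", // distinguishes ticks from real user messages
  };
}
```

Because the prompt is hard-coded, the agent can act autonomously between user messages without any new input arriving.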

SPEAKER_01

Yeah. And I could imagine, too, if we wanted to create some sort of front-end visual interface, like a Trello-board type of thing to manage these tasks, we could have a visual indicator where we could see these things being moved across visually. So it's not just inside of Telegram; you'd have it inside of a project management platform. And we could use this as more of the back-end data source to populate that.

SPEAKER_00

Yeah. And if you already have something like Asana or Notion set up, you could just connect through an MCP or through the native nodes and have Open Claw, sorry, have n8n Claw directly hook into that. Though it is a little risky, because it might end up messing up stuff that you've already put a lot of hard work into. So keeping it abstract and in its own silo allows you to get the work done and still have it separate from your main files.

SPEAKER_01

Yeah, and we could also do an approval system, where it says, hey, I'm gonna move these things over from to-do to done, and we click approve if we wanted that as well, if we want to put a little human-in-the-loop intervention in place so it doesn't go through and make changes unsupervised. So that's great. So in here you can insert, upsert, and adjust new tasks. You have the subtasks, get tasks, initialize info and date, and then the heartbeat. The heartbeat is one of the most interesting things with Open Claw: this ability to check in on a proactive basis, on a schedule, to see after any pending tasks and what's going on, so that it can more or less live autonomously without you. I love it. That's great. So these are the agents. We've got this, and then down here, which is what I really enjoyed, is this right here: being able to give the credentials to these documents and to a Google Drive folder. What I did is I gave this n8n Claw its own Gmail account, went through the Google Cloud platform, and added in the ability to do document creation and file management inside of Google Drive. And that's what it did originally for another section in here, because I could go inside of here and say something like: I want you to create a checklist of possible items for the podcast production, put it inside a Google Doc, upload that to Google Drive, and let me know when that is ready. All right, there we go. The power of Wispr Flow. Okay. So inside of here, just talk to me a little bit about this, the thoughts behind it, and possible evolutions with this.

SPEAKER_00

Yeah. So when I was initially playing around with Open Claw, one of the main issues I ran into was that anytime it gave me an output, it would just save it as a markdown file. I was using a VPS to run it, so it would just save it to some folder on the VPS as a markdown file, and then I'd actually have to ask it to retrieve that file and give it to me through Telegram or something. So anytime we're asking the agent to do some work, it has to persist somewhere. There has to be some kind of output outside of the chat. That's where the document manager comes in, where any work that it does for you actually gets saved onto a more permanent platform.

SPEAKER_01

Got it. Okay, cool. And then in terms of upgrading this, what do you think about evolutions of this? Where would you like to see this go with the document manager?

SPEAKER_00

So right now we're using Google Drive, right? Hypothetically, you could connect it to OneDrive or any kind of platform that you're using to store your data. You could replace all of this with Notion or Obsidian or whatever you like to use.

SPEAKER_01

Got it. Yeah, okay. So then upgrade it based upon your own knowledge base.

SPEAKER_00

Yeah.

SPEAKER_01

That makes sense. I don't know if this came through, so maybe we'll try this again. It might still be processing; let's check the executions. Oh, let's see.

SPEAKER_02

Let's check it. Oh, we got an error, everybody.

SPEAKER_01

We got an error. Real-time bugs. Let's see what's going on in subtask. Oh, I don't know. Let's see here. Problems in the node: upsert subtask.

SPEAKER_02

Okay, let's see here. Let's go inside of here. Maybe we'll do some real-time debugging. Okay, well, maybe I had an error inside of here.

SPEAKER_01

We can always come back to this later, but that was the issue.

SPEAKER_02

Okay. Let's see. Yeah.

SPEAKER_01

Validation error: value "Dylan" does not match the column type. Oh, okay, so I need to add that column type inside of there. Okay, well, let's see. I'll say this: I got this error, can you please add a new subtask without this being included?

SPEAKER_02

Let's see.

SPEAKER_01

We're gonna test the ranges of this system and see how well it can do inside of here. So right now it's running in the background. We can see the execution's running right now, and we'll see how long this takes to actually complete, or if we get another error. Oh, we got another error. All right, so it looks like we've got some more work. I know this was a proof of concept, so we'll go back and fix it, but I think we have to go inside the data tables and add that column inside of there.

SPEAKER_02

Yeah.

SPEAKER_01

All right, maybe I'll add, yeah, the ability to do that. Okay. I know we can add an append to it. I don't know if we can change the actual columns inside of data tables. I don't think we can do that; I think I have to do that manually. But okay, cool. So we have that inside of here. We have this document management system going on. And now we have the email manager inside of here as well. In here you have all these nodes. I know you could also do this possibly with MCPs, but talk me through the process of setting this up.

SPEAKER_00

Yeah, so ideally you would want to, as you mentioned, set up a dedicated Google account for your n8n Claw instance. That way you can give it free rein in the inbox to delete messages, to move messages, and even to actually go ahead and send them instead of just saving them as drafts. And there's also an email trigger, where if it receives an email from you, or from anybody indeed, it can actually read that, process it, and then act upon it. And the reason I've kept it as a separate subagent is to not overcrowd the main agent with too many different instructions. So the main agent just knows that, okay, there's a subagent I need to delegate this to. And that keeps the context clean.

SPEAKER_01

Yeah. And also, now that I'm thinking about it, one thing that could be useful is if we had the error node: if an error happens, trigger that and then send it back through the chain, through Telegram or wherever it's coming through, to say, hey, this error happened inside of here, just to let you know what's going on. So I'm not waiting and wondering inside of here; I'll actually have a clear understanding of what happened, why it happened, and what's going on.

SPEAKER_00

Yeah, for sure.

unknown

Okay.

SPEAKER_00

So you could just add an error output, use the same switch logic, and just hook it up again.

SPEAKER_02

So okay, cool.

SPEAKER_01

And yeah, so inside of here we can send these emails through here. And then of course we have the research ability that you have hooked up inside here. I got rid of your Tavily; there was a Tavily node, and I got rid of it because I didn't want to set it up just to get this ready for the podcast. So right now you have research agents right here inside of here, and then you have a vector database. Talk to me about what's going on over here in this section.

unknown

Yeah.

SPEAKER_00

So the research agent is basically for anytime it has to access external information or dive deeper into a topic. There's the Wikipedia tool, of course, which is free and you can just connect it up right away. I did have Tavily set up on mine, and Tavily is basically like a search API where you can give it queries and it will return web search results to you. You could even use Perplexity if you wanted, or any combination of those, depending on how deep or in-depth you want that research to be. So the idea is that anytime you give it an advanced task, n8n Claw might end up actually using all of the nodes available to it. It'll first use the research node to get information, then it might ask the worker node to process that, and then it'll ask the documentation node to finalize that into a document that's ready to go.

SPEAKER_01

Got it, yeah. So it can access the different tools based upon what's needed inside here. I can imagine Perplexity as well, and a couple of other ones that would make sense.

unknown

Yeah.

SPEAKER_01

And in terms of the vector store, the persistent memory and everything that you have set up in there, using both Postgres and the vector store, talk to me a bit about the logic behind this.

SPEAKER_00

Yeah. So Open Claw handles persistence very well. You can pick up a conversation after hours and it just remembers what you said last and keeps going. Typically, when we set up n8n workflows, the memory node, in this case the Postgres node, has a key with which it accesses the previous messages, right? And that key is usually a variable: it'll be the number that messaged it, or some kind of dynamic value. So to get around that, what I've done here is I've just hard-coded the key to the username that you set up when you initialize n8n Claw for the first time. This way, anytime that you message it, it's gonna look at that key and just look up the previous n number of messages from that hard-coded key. So anytime you message it, and from any platform, more importantly, it's just gonna know what was said last, and it's gonna be able to continue that conversation with you.
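
The trick is easiest to see as a session-key function. A conventional per-channel key like `telegram:<chatId>` would split the history across platforms; fixing it to the first-run username unifies it. The key prefix here is an illustrative assumption:

```javascript
// Sketch of the hard-coded memory key: every channel resolves to the
// same key, so the Postgres memory node always loads one conversation.
function sessionKey(username) {
  return `n8n-claw:${username.toLowerCase()}`;
}
```

Whether the message arrives from Telegram or WhatsApp, `sessionKey` returns the same value, which is exactly why the conversation carries over between platforms.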

SPEAKER_01

Got it. Okay, yeah. So it'll know what channel it's coming from, and that it's coming from you, so it can respond with the right context.

SPEAKER_00

Yeah.

SPEAKER_01

Okay, great. Okay.

SPEAKER_00

Yeah. So the Postgres store is for short-term memory and the vector store is for long-term memory. The Postgres only handles, right now I've set it up for the previous 15 messages. And you can really play around with that number as much as you want, but I've set it to 15; I decided it was enough for my use. And then there's a schedule trigger that actually pings the vector store, sorry, pings the Postgres database, grabs all of the messages from that last period of time, summarizes them, and then uploads that to the vector store, which the main agent can then access whenever it needs to.

SPEAKER_01

Got it. So yeah, this is holding the long-term memory for context of anything that's long-running, and this is for short-term, the last 15 messages back and forth. Yeah. Great. All right.

unknown

Nice.

SPEAKER_01

And I'm sure we could always update or upgrade the vector memory with metadata filters and re-ranking, things like that, to give it more accurate memories if needed. Yes, for sure. Okay, great. And then inside of here, this is the record keeping and ingesting. Talk me through this process.

SPEAKER_02

Yeah.

SPEAKER_00

So yeah, this is where it pings the Postgres database. It pulls the last number of messages, chunks them, and uploads them to the vector store. And the data table keeps track of the last ID that was updated, so you don't end up with duplicate entries.
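
The duplicate guard can be sketched as a high-water-mark filter: only messages newer than the last archived ID get chunked and pushed to the vector store, and the mark is advanced afterward. The message shape is an assumption, and the messages are assumed to be ordered by ascending ID:

```javascript
// Sketch of the incremental archive step. The returned newLastId would
// be written back to the data table row so reruns skip archived rows.
function selectNewMessages(messages, lastArchivedId) {
  const fresh = messages.filter((m) => m.id > lastArchivedId);
  const newLastId = fresh.length ? fresh[fresh.length - 1].id : lastArchivedId;
  return { fresh, newLastId };
}
```

Running the schedule twice over the same rows yields an empty `fresh` list the second time, which is exactly the no-duplicates property Shabbir describes.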

SPEAKER_02

Uh yeah, goes into the initial table. Right, all right.

SPEAKER_01

So I think this is a run-through of the entire system inside of here. I do want to do one more test: can you take the checklist of ideas for the podcast and send an email to Dylan.watkins at n8n.io? Okay.

SPEAKER_02

All right, let's just take a look at this execution history. Let's see how this thing goes. All right, it's running through right now, going through on the back side. Yes, I had the upsert subtask issues.

SPEAKER_01

And let's see what it does. It's being processed. And now we can see it has succeeded. Cool. It made a list of the possible things it can do, and it sent it to my email. And let's see here, if I open up my email on my side to confirm that it did come through. Oh, yep, there it is. I have all of my notes summarized right inside of here. So there we go. All right, beautiful. Maybe do a little formatting on that, but overall it worked well. Gonna keep the n8n appended in there. All right. Fantastic. Now, in terms of future rollouts of this, what are your thoughts around evolutions of this? Like if someone was to take this, pick up the baton from you, and take it to the next level, what are some things that they could do?

SPEAKER_00

Sure. So the beauty of using n8n for this build is that since you can see everything that's under the hood, you can really add domain-specific tools to the main agent. Let's say you wanted to set this up specifically for running your cold email campaigns; then you can just add a bunch of tools that help you do that, right? You can keep this as the base skeleton, it's still there, and you just go ahead and add those extra subagents with those tools and update your system instructions with a few more lines to be able to access that. The idea behind building it this way was to keep it as flexible as possible. So for any kind of domain expertise that you need, or any specific tools that you need to give it access to, all you need to do is create that one silo and you're there.

SPEAKER_01

Great. And I also know we met up with Angel yesterday from the n8n team, and we were talking with him about some things that we could do. Are there any things that he talked about that stood out to you?

SPEAKER_00

Yeah, so Angel actually had a pretty awesome idea about making the agent recursive, to a certain extent, where the agent could create its own schedule triggers. Right now the schedule trigger is hard-coded; you'd have to set the interval at which it runs. But hypothetically, you could have a sub-workflow that executes and sets the variable for the schedule trigger. That way the agent can then trigger itself to run one more time, which is kind of mind-bending when you think about it.

SPEAKER_01

What we like is this proactiveness of it, being able to trigger itself, which is an incredible concept. And that's the whole beauty of this, the whole beauty of building it within n8n: we can build these systems out, we can pass it as a workflow, as a template, to somebody else, and they can take it and bring it to the next level. The whole collaboration, the community coming together to look at this and go, okay, what can I evolve it with? One of the things I was looking at was the ability to use the Execute Command node to write files locally, or we can SSH with n8n to other places as well. So those are some other evolutions that I thought would be really nice: to be able to write local files, or to SSH into some other computer somewhere else and take some sort of actions. Those are some things I'm seeing we could do with this that would be great. But overall, it's very exciting to see the evolution of this. I think when people show a new way to build in this whole world of AI and automations, it's like the four-minute mile. You see what's possible, and once you see that it's possible, then you go, okay, great, how do I rebuild it with the tools that are available to me, and maybe stack and build a taller skyscraper? So I think this is an incredible thing that you've done, and I love what you've built here. And look, this is a proof of concept for everybody. This is showing you what is possible with it. And the template will be available, because I believe we're gonna be making this template available for anybody to download and use. Are there any other insights or things that you want to share with people if they're gonna go use this template to build their own version?

SPEAKER_00

I think the number one insight I would have is, like with building any workflows out, you'll have to be patient when you set it up, and you'll have to be prepared to run into errors and debug them. Because the bigger your system gets, the more prone it's gonna be to those hiccups when you're starting out. But that's really the beauty of building with these platforms: you get those errors and you make your systems better and better. So don't lose patience, basically, is the only insight that I have.

SPEAKER_01

Persistence, 100%. All right. Awesome. Shabbir, thank you so much for coming on the podcast. It's been an honor and a pleasure. How do people find you if they want to see more of the content that you're making?

SPEAKER_00

Yeah, I have a YouTube channel and I also have a website, so maybe we can leave a link to both of those in the description.

SPEAKER_01

Sure. We'll put that down below. All right. Well, thank you so much. Have a great day, and I will be seeing you on the other side. Take care now. Thank you so much. Thank you. Bye-bye. Bye.