The AI Daily Brief podcast introduced a 10 Weekend AI Resolution for 2026, aiming to enhance AI skills through practical projects. The resolution includes tasks such as model mapping, deep research, data analysis, visual reasoning, information pipeline, and automations. Participants are encouraged to engage with AI tools, create visual explanations, analyze data sets, and build information presentation workflows. The podcast also featured sponsors like Super Intelligent, Robots and Pencils, and Blitzie, offering AI solutions and services. This structured approach aims to help individuals develop fluency in AI tools and techniques over weekends throughout the year.
Transcription
Today on the AI Daily Brief, start your 2026 off right with the 10 Weekend AI Resolution. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Alright, friends, quick announcements before we dive in. First of all, thank you to today's sponsors: Superintelligent, Robots and Pencils, and Blitzy. To get an ad-free version of the show, which is just $3 a month, go to patreon.com/airdlybrief, or you can subscribe on Apple Podcasts. To learn about the limited availability I have for speaking engagements, newsletter updates, job updates, sponsorship, etc., check out airdlybrief.ai, and to keep track of the forthcoming AIDB intelligence product, whatever that may end up being, go to aidbintel.com. Alright, friends, we have finally come to the end of our end-of-year coverage. This show is coming out on the very last day of 2025, and we are firmly now looking towards the future. And as sort of silly as New Year's resolutions can be, it is a good time to reset, build some new skills, and head towards the future that you most want for yourself, and my guess is that a lot of you are thinking about how AI can be part of that future. So what I decided to do for this final episode of 2025 was put together a 10 Weekend AI Resolution. Call it a self-guided path to AI fluency. Basically, it is a bunch of projects that are going to give you exposure to a wide array of AI tools, which, if you have mastered all of them, will certainly put you ahead of the vast, vast majority of people on the planet when it comes to using AI. Now, of course, I assume that lots of you will have familiarity with lots of these different tools and patterns, so feel free to pick and choose which ones seem relevant or simply use them as inspiration. Now, if you want to engage and even share what you have built, check out aidbnewyear.com.
Basically, I'm going to vibe code a website where we can all share what we're doing, to the extent that people are interested in that. Again, that'll be at aidbnewyear.com. One more note from a process standpoint: I built the outline of all the activities I wanted to see, Claude put it into a set of slides, and Genspark turned it into this presentation. So let's talk about the setup. This is not a course in the sense that one thing builds upon another. It's 10 different projects, each imagined for a weekend (although, of course, you can do them whenever you want), that are practical by default, forcing outputs, not theory; that are highly completable, where each project ends with something real; and completely modular, in that you can skip them or do them in any order without derailing anything, although they are also compounding, in the sense that if you do do them in sequence, some of the later ones will build on the others. Each weekend includes a clear deliverable, a default project for a beginner or intermediate user, as well as an advanced modifier, and an expectation that it will take a couple hours of work to do. If you want to give each project a score, you can rate things on outcome quality; the time you would have saved versus doing it manually; repeatability, as in, could you do this again easily; and whether you would actually use this process again in your regular life. If you're keeping notes, think about things like the best prompt, the approach, and what didn't work, so that you can do it better the next time. The goal is to build some habits and workflows that you can actually still be using some months from now, although, of course, there's no guarantee that what we'll be doing in six months is anything like what we're doing right at this moment. Now, to get yourself prepped, I suggest taking just about 30 minutes to build a little bit of infrastructure before your first weekend.
Each of these weekends should be about doing the work, not setting up accounts and things like that. So you might want to set up an AI resolution folder with subfolders for each weekend, wherever you do that, whether it's on your computer directly or in Drive, and then you want to figure out your toolset. Specifically, I would suggest taking a little time to review the automation platforms like Lindy, n8n, and Make, and pick one that feels best, and same with the vibe coding platforms, where for most I would suggest either Replit or Lovable, or just use Google AI Studio, as there is one vibe coding project that is specifically designed around Google AI Studio, so you might just want to invest there on the whole. With that out of the way, let's talk about your first weekend project. Getting a little bit meta, we are going to vibe code a resolution tracker: a web app that actually tracks your progress through these 10 weeks. Now, exactly how you make this is up to you. It is likely that you will want a list of all 10 weekends, a completion checkbox, a notes field, a progress bar, maybe, if you like that ranking system I suggested, a field for that, and anything else that seems relevant to you. For example, if you're interested in tracking the time, you could add that as a field. If you want to upload any content, you could add that as a feature. Now, if you're using Replit or Lovable, you will also be able to deploy this live so that you can actually use it and interact with it. And of course, the last step is to actually use it once you've got it launched. If you want a more advanced version, you can just expand the feature set: for example, adding user authentication for multi-user support, a collaboration mode if you want to get friends and family involved, maybe a shared tracking system; you can optimize it for mobile, or basically do anything else that makes it more robust.
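To make the tracker idea concrete, here is a minimal sketch of the kind of data model the episode describes (a weekend list, a completion checkbox, a notes field, an optional score, and a progress bar). The field and class names are illustrative, not anything prescribed by the show; a vibe coding tool would generate something equivalent inside your web app.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Weekend:
    """One of the 10 weekend projects in the tracker."""
    title: str
    done: bool = False
    notes: str = ""
    score: Optional[int] = None  # optional 1-5 rating, if you use the ranking system

@dataclass
class ResolutionTracker:
    weekends: list = field(default_factory=list)

    def progress(self) -> float:
        """Fraction of weekends completed, used to drive the progress bar."""
        if not self.weekends:
            return 0.0
        return sum(w.done for w in self.weekends) / len(self.weekends)

# Example: seed the tracker with the first three projects and complete one
tracker = ResolutionTracker([
    Weekend("Vibe code a resolution tracker"),
    Weekend("Model mapping"),
    Weekend("Deep research sprint"),
])
tracker.weekends[0].done = True
print(tracker.progress())
```

The point is just that the app's core state is tiny; almost all of the vibe coding effort goes into the interface and deployment around it.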
We're starting with this both from a practical standpoint, giving you a tool that can actually help with the rest of the weekends, as well as by giving you something super tangible right from the beginning that will give you a sense of just how powerful these building tools have gotten. Weekend 2's project we're calling model mapping: building your personal AI topography. This project reflects the fact that one of the easiest ways to get extra value out of AI is to figure out which models you like for different use cases. Most people default to one AI model for everything, leaving capability on the table, and the goal here is not to figure out which model is objectively best for a task, but to develop your own instincts for which work best for the tasks that matter to you. So the idea here is to pick a few models that you have access to. If you only have the free versions, that's okay; you can still likely do big chunks of work that give you a feel for the different models. Run the same task through each. I'd suggest some combination of a deep research task, a writing task, a business strategy question, and maybe some data analysis and/or visualization. On aidbnewyear.com, I'll put a few more suggestions for this. As you're comparing, think about things like which model was faster, which asked better questions, which just felt right. When you're done, create a one-page rule-of-thumb notes document that captures what you found about which model you like for which use case. Now, if you want to make it a little bit more advanced, you can test an additional set of specialized tools that are purpose-built for each particular use case. You could build a more comprehensive matrix than just your one-page rule of thumb, including things like cost. You could test output consistency over time; in other words, run similar tests a number of different times and see how much variability there is.
You could also track how much editing time there is per model, basically what the differential between raw output and your final product is going to be. I know some of you will probably be rolling your eyes at this one for being super simple and basic, but you'd be amazed how much value the average person is leaving on the table just by sticking to one model exclusively. The third weekend's project is to do a deep research sprint. Now, this comes out of the fact that I have seen a lot of chatter recently, as well as a lot of studies, that suggest that despite deep research features being available across all of these models for a big chunk of this year, a shockingly low percentage of people have actually used them. This is the capability that most people have heard about but few have stress tested. And so the idea of this weekend is to close the gap between "I know AI can theoretically do research" and "I trust this enough to make a decision." The goal is to pick a decision you actually need to make, or a research project that actually really matters. It could be competitor analysis, it could be some look at different opportunities, it could be a pricing project, it could be product research. Whatever you pick, the goal is to have this be something real that's actually valuable to you, not just hypothetical. From there, pick a deep research tool and really iterate with it. Push back, ask for disconfirming evidence, don't accept the first output; push it and see how close you can get to something that can actually inform a new decision or a new way of thinking. If you want a slightly more advanced version, you can run the same question through a set of major tools so you can compare and contrast, kind of building off of what we were doing in Weekend 2, or you could fact-check them against each other, either manually or by using the models to check each other's output.
My guess is that for those of you who haven't really tried deep research, if you do this, you will start to naturally spot more times that you could actually be using it in your regular work life. Now, at this point, as you can tell, we are not, with this 10 Weekend AI Resolution, doing some crazy advanced set of things. We're basically creating context to go pretty deep on the core capabilities that represent a huge part of the work that we can do with AI right now, which brings us to our Weekend 4 project, which is a data analysis project. Once again, this is another capability that a lot of people know about and fewer people have used, and working with data is not just for data analysts or financial analysts. The first step for this is to gather a real data set. Ideally, it would be something from your own life: it could be a bank statement, it could be analytics from some software you use, it could be your Spotify listening history. And if you don't have anything that's particularly interesting for yourself or for your work, there are also tons of public data sets out there, for example on Kaggle, which has tons and tons of data sets that you can download and play around with. Once you've got your data set, use your preferred LLM to propose cleaning steps (i.e., figuring out what's messy, what's missing, or what needs normalization), five to ten useful metrics to calculate, and three hypotheses worth testing. From there, produce a clean data set, a summary table of key metrics, three insights (patterns, anomalies, or trends you didn't know about), and three actions that could be taken based on those insights. From there, you can write a one-page insights memo that has that key summarization. Now, if you want to make this more advanced, try not just to analyze but to build a repeatable analysis pipeline. You can try to create a prompt template that you can reuse monthly on updated data, or connect your analysis to a live data source.
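As a rough sketch of what the clean-then-summarize step can look like if you pull the LLM's proposed rules into code: the rows, column names, and normalization rules below are invented for illustration (a toy bank-statement export), not anything from the episode.

```python
import statistics

# Hypothetical raw rows, e.g. exported from a bank statement
rows = [
    {"date": "2025-01-03", "category": "groceries", "amount": "42.10"},
    {"date": "2025-01-05", "category": "", "amount": "15.00"},        # missing category
    {"date": "2025-01-09", "category": "transport", "amount": "n/a"},  # unparseable amount
    {"date": "2025-01-12", "category": "Groceries", "amount": "30.50"},
]

def clean(rows):
    """Apply the kinds of normalization steps an LLM might propose:
    drop rows with unparseable amounts, lowercase categories,
    and fill missing categories with 'uncategorized'."""
    out = []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # skip rows flagged as unparseable
        out.append({
            "date": r["date"],
            "category": (r["category"] or "uncategorized").lower(),
            "amount": amount,
        })
    return out

def summary_metrics(rows):
    """A couple of the 'five to ten useful metrics' for the summary table."""
    amounts = [r["amount"] for r in rows]
    return {
        "count": len(rows),
        "total": round(sum(amounts), 2),
        "mean": round(statistics.mean(amounts), 2),
    }

cleaned = clean(rows)
print(summary_metrics(cleaned))
```

The repeatable-pipeline version of this weekend is essentially the same code pointed at a fresh export each month.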
Also, as always, you could compare insights from different LLMs on the same data set to get a sense of which you think works best. For Weekend Project 5, we are taking advantage of Nano Banana Pro and ChatGPT Images 1.5. This is our visual reasoning weekend. And we are not just trying to create pretty pictures here, but to get AI to think through the logic of visual communication and explain complex ideas through visual media. The deliverable is an infographic, diagram, or visual explainer that you'd actually use. So to get this started, pick a concept that genuinely benefits from visualization. It could be a process, a comparison, a framework, a timeline, a system. Now, this doesn't have to be about your work life, but I think that that's where there's going to be the richest material for this. From there, use your preferred LLM to reason through how it should be visualized. Not just "make me an infographic," but "what's the right way to visualize this concept, and what are the trade-offs between different approaches?" From there, try to generate two alternate designs: a flow chart versus a 2x2 matrix, a timeline versus a cycle diagram, etc. And like I said, if you're using Nano Banana Pro or ChatGPT Images 1.5, you can just build it directly there, or if you're not using those for some reason, you could take it to another tool like Canva or Gamma, but I would suggest staying in ChatGPT Images or Nano Banana Pro. From there, apply visual QA. Is this readable in 5 seconds? Does it have the right amount of text? Are there any artifacts that don't need to be there? Are definitions included where they're needed? Does it have one clear takeaway? Now, if you don't have a concept to visualize, an idea could be to create a visual explainer for your own job or business. What do you actually do, explained in one image to someone outside your field? This might actually be harder than it sounds. Alternatively, you could take the data insights from Weekend 4 and visualize them.
Turn the key findings into a chart or diagram that tells the story. If you want a more advanced version, try to create a visual system rather than just a single image: design a template that can be reused, such as a consistent infographic style for a series or a diagram format for recurring presentations. You could even create a visual pattern library of frameworks you can apply, like 2x2 matrices, process flows, comparison tables, and timelines. The project is done when you can explain the idea faster with the visual than with words, and someone who sees the image gets the point without you having to explain it. Now, for the community website that I'm building, I'm going to try to have a gallery feature for projects like this to make it really easy to share what you've created. Today's episode is brought to you by my company, Superintelligent. Superintelligent is an AI planning platform. And right now, as we head into 2026, the big theme that we're seeing among the enterprises that we work with is a real determination to make 2026 a year of scaled AI deployments, not just more pilots and experiments. However, many of our partners are stuck on some AI plateau. It might be issues of governance. It might be issues of data readiness. It might be issues of process mapping. Whatever the case, we're launching a new type of assessment called Plateau Breaker that, as you probably guessed from the name, is about breaking through AI plateaus. We'll deploy voice agents to collect information and diagnose what the real bottlenecks are that are keeping you on that plateau. From there, we put together a blueprint and an action plan that helps you move right through that plateau into full-scale deployment and real ROI. If you're interested in learning more about Plateau Breaker, shoot us a note at contact@bsuper.ai with "plateau" in the subject line. Today's episode is brought to you by Robots and Pencils.
When competitive advantage lasts mere moments, speed to value wins the AI race. The big consultancies bury progress under layers of process. Robots and Pencils builds impact at AI speed. They partner with clients to enhance human potential through AI, modernizing apps, strengthening data pipelines, and accelerating cloud transformation. With AWS-certified teams across the US, Canada, Europe, and Latin America, clients get local expertise and global scale. And with a laser focus on real outcomes, their solutions help organizations work smarter and serve customers better. They're your nimble, high-service alternative to big integrators. Turn your AI vision into value, fast. Stay ahead with the partner built for progress. Partner with Robots and Pencils at robotsandpencils.com/aiDailyBrief. This episode is brought to you by Blitzy, the enterprise autonomous software development platform with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines of code. Enterprise engineering leaders start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan that generates and pre-compiles code for each task. Blitzy delivers 80%-plus of the development work autonomously, while providing a guide for the final 20% of human development work required to complete the sprint. Public companies are achieving a 5X engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding copilot of choice to bring an AI-native SDLC into their org. Visit blitzy.com and press "get a demo" to learn how Blitzy transforms your SDLC from AI-assisted to AI-native. Weekend 6 is the information pipeline. This is basically about using some underutilized tools. Specifically, NotebookLM and Gamma are extremely powerful, especially when it comes to information presentation.
Too often I see people who try them once or twice and then kind of forget about them. The goal for this weekend is to build them into your actual workflow so you can consistently turn complex or raw or messy information into polished outputs without the manual grind. The deliverable here is a reusable workflow for turning raw information into polished output. For example, taking one set of inputs and turning them into an executive summary, an FAQ, a presentation deck, and even, in the case of Gamma, a website, all again with a single reusable workflow. So first up, take a set of information that you need to process. This could be a report or a set of reports, a set of meeting notes, multiple articles on a topic, or a set of podcast transcripts, and you're going to upload those all into NotebookLM. From there you can create a hundred different materials around them. You can generate an executive summary, you can create a key terms glossary, you can build an FAQ (ask yourself: what would a skeptic ask about this, what are the counterarguments, what's missing?). You can generate multimedia around it, like an audio or video podcast version, and you can also generate presentations. Now, for presentations, you can either use NotebookLM natively, which has gotten really good at presentations with the advent of Nano Banana Pro, or you can take an outline or executive summary from NotebookLM and use Gamma's presentation-building tools to do it over there. With Gamma, you can turn a single set of information into multiple formats at once, for example building a slide deck and a website at the same time. As a real example, when I was testing which tool I wanted to build the background material for this podcast in, I once again tried Genspark and Manus, but I also tried Gamma. And what I wanted to do with Gamma was, one, put in all the text that I wanted, not just a summary, as would be the case for a slide presentation.
And two, I wanted to turn it into a website where everything was all there at once. You can see here, this is just their default style with a prompt to have the visuals be in a 1950s retro-futuristic style, and it has the exact text that I gave it. But you don't have to give it exact text; you can also give it a summary and let it do a bunch of the work. Again, the point of this exercise is to build an information presentation pipeline while also getting a feel for these two very powerful and underutilized tools. In Weekend 7, we're moving into the world of automations. The interfaces for some automation tools are not necessarily as intuitive right now as a simple chatbot interface, but figuring out how to take common workflows that you experience day in and day out and automate them is one of the highest-leverage ways to increase the value that you're getting out of AI. Now, if you have a different idea for what you would like to automate, by all means go do it. I'm just suggesting what I think could be a common and useful thing, which is to build some sort of content distribution machine. The deliverable in this case is a working automation that handles a big chunk of your content production or distribution workflow. And even if you are not a content creator, if you're on social media at all, you probably are in some way; you can also do this for your business, if not for yourself personally. Your automation is going to have the following components. A trigger, i.e., what kicks it off: it could be a new note added, a new document in a folder, a Slack message with a keyword, a form submission, a calendar event, etc. A transformation: what is the automation supposed to do to the input? Is it supposed to summarize, extract key points, generate drafts, categorize, etc.? From there, routing: where does the output go? Is it sent to a Slack channel, to an email, to a document, to a spreadsheet? Next is a human approval step.
You want to keep yourself in the loop to review and approve before the final action. And then logging: you want to record what happened, with timestamps, links, and status notes, even if it's just a row in a spreadsheet. I mentioned before that I think you should probably try just one of Lindy, n8n, or, honestly, some of the integrated workflow builders in Slack and Notion, and stick with it for the sake of this first project. An example automation would be something like: when I add a note to my content ideas Notion database, it automatically summarizes it, generates three tweet variations and a LinkedIn post draft, and then sends those drafts to a Slack channel for my review. If you don't have a project, I'd suggest maybe a weekly reading digest. Every Friday, compile articles you saved from that week, whether it's from an official bookmarking tool or simply a form where you paste links, and have the automation summarize each article and email you a digest with the summaries and the links. Maybe, for good measure, have it create a LinkedIn post as well. Now, if you were looking for something more advanced, you can chain multiple automations together. You could build a system where publishing one piece of content triggers a cascade: draft social posts, schedule them to a queue, log the content in a database, update a content calendar. You could also add conditional logic, like different distribution paths for different types of content, or you could add error handling, i.e., what happens when something fails. In Weekend 8, we're going to build a second automation, and this time we're really going to try to enhance your productivity workflow. I think in a lot of ways, a content automation and a productivity automation are your minimum viable automation stack. If Weekend 7 was about creating an output, this weekend is about managing input and follow-through. These are the types of automations that keep things from falling through the cracks.
So we're going to start with the same structure as Weekend 7. Your automation needs a trigger, a transformation, routing, a human approval step, and logging. And without knowing anything about your work, I would suggest either option A, building an email inbox follow-up system, or option B, building a lead response system. In the case of the inbox follow-up system, the trigger is labeling or tagging an email or forwarding it to a special address. The transformation would be summarizing the email, extracting the asks, drafting a reply, and identifying follow-up tasks. The routing is to send the draft reply for you to review and create a follow-up task in your task manager. And the log might be recording the contact name, topic, next step, and due date. For option B, the lead response system, the trigger would be something like a form submission or an inbound DM. The transformation would be categorizing the lead, drafting a personalized response, and assigning a pipeline stage. The routing would be either a Slack notification or recording in Notion, Airtable, or a spreadsheet. And the log would include contact info, source, stage, next action, and reminder date. Now, I should note that these are the sort of automations that you're starting to see built into a lot of the tools that you're already using. So, for example, if you're using Gmail or Superhuman as your email inbox, there's going to be some ability to build these types of automations right from there. Same with most CRM systems; HubSpot, for example, has a lot of this type of automation building right in the native suite.
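The five-part structure that both automation weekends share (trigger, transformation, routing, human approval, logging) can be sketched as a plain pipeline. Everything below is illustrative: the function names are made up, the "LLM" step is a stand-in, and the real version would live in Lindy, n8n, Make, or your email/CRM tool rather than in hand-written code.

```python
log = []  # the logging step: even 'a row in a spreadsheet' works

def transform(email_body: str) -> dict:
    """Stand-in for the transformation step. A real version would call
    your LLM of choice to summarize, extract the ask, and draft a reply."""
    first_line = email_body.strip().splitlines()[0]
    return {
        "summary": first_line[:80],
        "draft_reply": f"Thanks for your note re: {first_line[:40]}",
    }

def route(result: dict) -> None:
    """Stand-in for routing: a real version would post to Slack,
    email, or a task manager."""
    result["routed_to"] = "review-queue"

def approve(result: dict, approved: bool) -> None:
    """The human approval step: nothing ships until you say so."""
    result["status"] = "sent" if approved else "held"

def on_trigger(email_body: str, approved: bool = False) -> dict:
    """Trigger entry point, e.g. 'email forwarded to a special address'."""
    result = transform(email_body)
    route(result)
    approve(result, approved)
    log.append({"summary": result["summary"], "status": result["status"]})
    return result

r = on_trigger("Can you send the Q1 report by Friday?", approved=True)
print(r["status"], len(log))
```

However your platform expresses it, if you can point at which box is the trigger, which is the transformation, where it routes, where you approve, and where it logs, you have a complete automation.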
One final idea, if neither of those seems particularly useful to you, could be a meeting prep bot: basically, before any calendar event with an external person, look them up on LinkedIn, check your email history with them, generate a one-paragraph briefing that includes who they are, your last interaction, and what you might discuss, and deliver it to you some standard amount of time before the meeting via Slack or email. Weekend 9's project reflects the idea that 2026 is going to be a lot about giving AI better context so it can give you better results. So with this project, we want to build a resource that helps AI know your context. Most people have to re-explain their situation every single conversation, especially if you're jumping around LLMs as I'm suggesting. This weekend, you're going to build a system that fixes this so that you can stop repeating yourself. First, we're going to create a professional context document. This can include things like your role and responsibilities, your key projects and their current status, your communication style preferences, common tasks you need help with, domain-specific terminology or context, and what you don't want, such as formatting preferences or things to avoid. You're then going to create your AI operating system structure. So in Notion or Drive or whatever you'll actually use, you can create sections for your AI playbook, your best prompts for example. You can create things like an automation log or a decision log. And then, finally, you're going to create a capture habit: an inbox for all AI-related notes, voice memos, interesting prompts you see, and things to try, plus a 15-minute weekly review to process the inbox and update your system. The goal is to have a single place to store, retrieve, and reuse everything you've made throughout this resolution. You have a professional context doc that actually makes AI conversations better, and you have a habit, even if it's just a calendar reminder, to maintain the system.
Now, the advanced modifier here is to create multiple context profiles, for example separating work and personal, and also to include actual examples of your writing or emails in the context file. Now, I think this is a good exercise for most to do, but I will say that my instinct is that memory is going to be such a big push for all the LLMs next year that the context for this context engineering project could change very quickly throughout the course of 2026. For Weekend 10, we're moving back to vibe coding, but we're going to level it up a little bit. In Weekend 1, we used AI to help us build a resolution tracker. This weekend, however, is about building something that actually uses AI: basically, an application or web experience that takes advantage of AI itself. Google AI Studio has made this extremely easy. With Google AI Studio, you can add photo editing via Nano Banana. You can create conversational voice apps that actually embed a voice agent inside the thing that you're building. You can animate images with Veo, add a context-aware chatbot to your app, and a variety of other things as well. So a couple of ideas for things that you could build: a chatbot trained on specific knowledge, and this is where that context doc could come in, your expertise, your company's FAQs, a body of research, a personal knowledge base. You could also create a voice agent for a specific type of interaction, such as a language practice partner, a mock interview coach, or a meeting role player who acts as a difficult stakeholder. If you're feeling really ambitious, you could try to create a mini agent that processes information, such as ingesting documents, extracting specific information, and creating outputs in a structured format such as images or video. The big way to make this even more advanced is to build something that's not just for you but also for others. Give it to real people, get feedback, and iterate.
Move it, in other words, from side project to prototyping something real. So that is our 10 weekends. But on the assumption that some of these things you will have already done, and because I think this is going to be important, for a bonus or substitute weekend we're going to run an agent evaluation gauntlet. The deliverable here is an agent scorecard and a best-use-cases note, so you know what to actually delegate to agents going forward. Most people's mental model is still "chatbot." This weekend is going to update that mental model with firsthand experience of what agents can and can't do reliably. So I would suggest at the very least testing Manus and Genspark against what you can get in the base LLMs. I would run a set of three standardized tasks through each tool you want to use: a research and synthesis project; an operations task, like "turn this project description, meeting notes, or rough plan into a checklist with a timeline and roles assigned"; and especially a production task: "generate the following assets from this input: a summary doc, an email announcement, five social posts, and a one-page overview." My guess is that it'll be a little bit blurry at the beginning around what you should use regular LLMs for versus more agentic capacity. But as an example, when I am doing a complex pipeline from data analysis to visualization, it's really hard to do well inside of ChatGPT or Gemini, but pretty trivial for Manus and Genspark. Once you've got your tasks, score each agent on: accuracy and hallucinations; citations and traceability, i.e., can you verify what they did; ability to follow constraints and instructions; output usefulness without heavy editing; and repeatability, i.e., would it do the same thing again.
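One lightweight way to turn those five criteria into a scorecard is to rate each on the same scale and average them. The 1-5 scale, the criterion keys, and the sample ratings below are my own illustration, not something prescribed in the episode.

```python
# The five criteria described for the agent evaluation gauntlet
CRITERIA = [
    "accuracy",       # accuracy and hallucinations
    "traceability",   # citations: can you verify what it did?
    "constraints",    # follows instructions and constraints
    "usefulness",     # usable without heavy editing
    "repeatability",  # would it do the same thing again?
]

def scorecard(ratings: dict) -> float:
    """Average a 1-5 rating across all five criteria; refuse partial cards."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

# Hypothetical ratings for one agent on one standardized task
agent_a = {"accuracy": 4, "traceability": 3, "constraints": 5,
           "usefulness": 4, "repeatability": 4}
print(scorecard(agent_a))  # 4.0
```

A spreadsheet works just as well; the value is in rating every agent on the same tasks against the same criteria so the comparison is apples to apples.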
After testing, you can document which agents are good for what, what you'd actually trust them to do unsupervised, what still requires too much oversight to be worth it, and two to three specific tasks you're going to try to delegate to agents going forward. So if you do all of these, at the end of these 10 weeks you'll have built a personal tracker, a deep research workflow, visual reasoning skills, two active automations, a deployed AI-powered tool, an AI tool topography, an analysis pipeline, an info processing stack, and a personal AI operating system. You will also, as I said at the beginning, be ahead of 99.99% of people when it comes to the full breadth of what AI can do. I'm really excited to see what you guys build with all of this. Like I said, check out aidbnewyear.com, which will be a community hub for anything anyone wants to share about this. Obviously, I should say, and I think this goes without saying, but just for the sake of it: this is a free experience; there's no paid upsell or course or anything here, just a place for people to share what they're doing as we kick off a great 2026. For now, one more big thank you for an awesome 2025. I really appreciate all of your listening, watching, and engagement. And until next year, peace.
Key Points:
The AI Daily Brief podcast discussed a 10 Weekend AI Resolution for 2026.
The resolution includes projects to enhance AI skills through practical applications.
Projects cover AI tools, model mapping, deep research, data analysis, visual reasoning, information pipeline, and automations.
Sponsors of the episode include Superintelligent, Robots and Pencils, and Blitzy.
FAQs
The AI Daily Brief is a daily podcast and video that covers the most important news and discussions in AI.
To get an ad-free version of the show, you can subscribe on Patreon for $3 a month at patreon.com/airdlybrief.
The 10 Weekend AI Resolution is a self-guided path to AI fluency, consisting of projects that expose you to a variety of AI tools.
The goal of the first weekend project is to vibe code a resolution tracker web app to track your progress through the 10 weeks of the resolution.
You can engage in the projects and share your work at aidbnewyear.com, a platform for collaborative learning and sharing.