“The search for life elsewhere is remarkable in our age because this is the first time that we can actually do something besides speculation. We can send spacecraft to nearby planets; we can use large radio telescopes to see if there is any message being sent to us… And it touches the deepest of human concerns. Are we alone?” – Carl Sagan
Are we alone? Carl Sagan posed that fundamental question in a 1985 radio interview, pondering the possibility of extraterrestrial life. But the question is not only relevant at the astronomical scale; it is relevant at the terrestrial and even the personal one. Are we alone? Are you alone? Do you feel alone? PIXAR’s latest movie, Elio, takes an unsurprisingly emotional and profound look at that query. We’re fundamentally social creatures; even when we are introverted, reserved, or timid, we crave connection. We seek belonging and meaningful relationships. We hunger for community, either in person or via digital venues. We aspire and yearn for recognition by others. Feedback, we often say, is pure gold. It touches us deeply. Connecting with others seems to activate a grounded certainty that we are indeed not alone. We belong. We are seen. We matter.
Elio delivers spectacular visuals. It develops relatable, lovable, and fun characters. But more than that, it sends you home with a great reminder: don’t miss the obvious connection sitting right next to you (possibly even in the theater with you). You are not alone. We are here. Don’t look past the blessings in human form right next to you. Make an effort to see beyond your pain or loneliness and recognize that the answer might have been there all along. Know that others are here and others do care. And, just as you may be feeling lonely at times, recognize that there are others around you who may feel the same. Be kind. Be aware. And of course, be there when they need you, too.
I’ve been guilty of being oblivious at times. I think back to the many occasions when I was so in my own head that I completely overlooked a precious soul sitting right next to me. Buried in my calendar, my phone, or my computer keyboard, I might as well have been a light-year away from the present moment and the sometimes critical need right next to me. This is a good reminder to pause and better connect with the human treasures all around us. We need each other. We need to belong. We do belong.
Remember, you are not alone! Be kind to each other and connect. And, of course, go see Elio if you haven’t already. It is marvelous and profoundly human. A message we need to hear. Well done, PIXAR!
I hope you all had a great weekend! And for any fellow dads out there, I hope you had a great Father’s Day! I spent time with all four of my kids watching movies, grilling outdoors, and of course, celebrating over some ice cream on these hot summer days. Now, to be fair, it doesn’t take much to warrant a celebration in our household. Life is full of excuses for a soft-serve dose of that dairy goodness, but this weekend seemed particularly poised for that indulgence.
We love movies! As part of this weekend’s festivities, we had a full playlist of cinematic magic streaming on our living room screen. You all know me by now, so it probably doesn’t surprise you to know that I have my garage-based AI system curate our movie selection. It sends out text suggestions on what to watch. It keeps track of our viewing habits and has a good idea of what we like to see. But despite all that tech, my wife wasn’t quite satisfied. She suggested that it should consider recommending movies celebrating the anniversary of their general theatrical release. For example, “Incredibles 2” was released on June 15, 2018, so it would be a great one to watch on Sunday. I loved that idea! So, I went to work adding that context to our resident AI. I just needed data.
Good luck! I tried finding a good data source, but everything I found was driven more toward discovery, and most of it was flawed, including bad release date information. I finally landed on TMDB as a good listing of movies, with references to IMDb that could pull more official release dates from OMDb. Yeah, it was confusing, but sadly, there wasn’t a clean way to get this data. I needed a web service to aggregate all of this for me and my AI.
I’m going to stop now and just acknowledge that many of you are probably tired of hearing me talk so much about Vibe Coding. If that’s you, you can stop now. I won’t be offended. For the rest of you, yes, buckle up, here is another vibe coding story.
I launched VSCode with my GitHub Copilot-powered assistant that I call JoJo. I switched him to agent mode (super important, by the way) and began having a chat. I told him about my vision to create this web service and how I wanted to build this dataset and APIs for easy access. He created a movie_db folder and went to work on a script. The script ran right away and pulled down the data. I suggested a high-speed way to process the data, and he countered with caching the API calls to prevent overloading the providers. What a smart aleck! But he was right. That was a good idea because the free tier of API access was rate-limited.
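JoJo’s actual script stayed in the garage, but the caching idea is simple enough to sketch. Here is a minimal version of a disk-backed cache wrapped around those rate-limited API calls (the folder layout and function names are my own illustration, not the code he generated):

Python

import hashlib
import json
import time
from pathlib import Path

import requests

CACHE_DIR = Path("movie_db/cache")  # hypothetical cache location
CACHE_DIR.mkdir(parents=True, exist_ok=True)

def cached_get(url: str, params: dict, delay: float = 0.3) -> dict:
    """Fetch a JSON API response, caching it on disk so each unique
    request only hits the rate-limited provider once."""
    key = hashlib.sha256(f"{url}?{sorted(params.items())}".encode()).hexdigest()
    cache_file = CACHE_DIR / f"{key}.json"
    if cache_file.exists():
        return json.loads(cache_file.read_text())
    time.sleep(delay)  # stay politely under the provider's rate limit
    response = requests.get(url, params=params, timeout=10)
    response.raise_for_status()
    data = response.json()
    cache_file.write_text(json.dumps(data))
    return data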
Finally, I had a good dataset to use, and JoJo had compressed it into a serialized object for fast access. I then switched to having him create the Python web service and gave him a general idea of the APIs I wanted. He suggested some routes and wired together a Python Flask app. I told him that I wanted to use FastAPI instead and that I wanted to build all the tests before we built the APIs. He reluctantly complied and had me run pytest to verify. All good. Then the fun began. He started churning on the code for the APIs.
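The real moviesthisday.com code isn’t reproduced here, but the tests-before-APIs flow looked roughly like this minimal sketch (the route and field names are illustrative assumptions, not the actual service):

Python

# A minimal sketch of the test-first FastAPI flow (illustrative, not the
# actual moviesthisday.com service).
from datetime import date

from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

# Stand-in for the serialized movie dataset loaded at startup
MOVIES = [
    {"title": "Incredibles 2", "release_date": "2018-06-15", "popularity": 98.0},
    {"title": "Toy Story", "release_date": "1995-11-22", "popularity": 95.0},
]

@app.get("/movies/today")
def movies_today():
    """Movies whose theatrical release anniversary is today, most popular first."""
    today = date.today()
    hits = [m for m in MOVIES
            if m["release_date"][5:] == f"{today.month:02d}-{today.day:02d}"]
    return sorted(hits, key=lambda m: m["popularity"], reverse=True)

# The pytest-style test, written before the route existed
def test_movies_released_today():
    client = TestClient(app)
    response = client.get("/movies/today")
    assert response.status_code == 200
    assert isinstance(response.json(), list)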
At this point, I should acknowledge that I was very tempted to jump in and code some lines myself. You can definitely do that, and these tools will co-develop with you, but I wanted to see how far I could go just vibing my way along. It turns out, a long way! The APIs were looking good, and it was extremely fast. I decided I wanted a nice UI, so I told JoJo to build a web page and gave him a general idea of what I wanted to see. He spun up some templates, added some tests, and plumbed in a new route for the landing page.
“Show the movies that were released on this day in history and sort them by popularity.” Boom! In less than a minute, JoJo had a basic screen up and running. I asked him to tweak the colors and make it more modern with a date navigator. He did, but I didn’t like some of the placements, so I asked him to nudge things around a bit more and adjust the style. I must confess, this is where I spent probably too much of my time. It was too fun and easy to ask him to make minor tweaks to scratch my curiosity itch. But he never complained; he just kept coding and plodding along. I even had him add additional pages for “Search” and “About”, which had nothing to do with my original goal.
About eight hours later, we were done. Yes, that is probably about four times longer than I needed, but I was having so much fun! Fun? Yes, legitimate, awe-inspiring fun! I finished up the project by asking JoJo to build the Dockerfile and help me launch the app as a public website for others to use. He complied. In case you are wondering, I even spent the $11 to get a domain: https://moviesthisday.com. I still have a non-stop list of updates spinning in my head, not the least of which is an MCP server for AI.
When I launched my first startup, we spent over a year getting our business and first website launched. There was a lot of development time for that. I can’t imagine how different that story would have been if we had Vibe Coding to accelerate our efforts back then. This is a game changer! I want all of you to get a chance to vibe too. If you tried it in the past and weren’t impressed, please try again. Advances in these tools are landing on a weekly basis now. I’ve seen it myself. They just keep getting better.
Technology amplifies human ability. Vibe Coding feels like digital adrenaline. I’m a little addicted. But it feels great! It has definitely helped bring the fun back into coding again for me. I wonder if the same could happen for you?
Now, for those of you who managed to actually stay with me through today’s way-too-long blog post, thank you! I’m excited for you. We are living through an amazing time in technology. Let’s get busy putting this great tech to use for the betterment of ourselves, our companies, and our world. Lean in! Try your hand at this ice cream of coding. The scoops are amazing!
Oh, and in case you are wondering what movie to watch tonight…
12 meetings, 5 calls, and 3 walk-ups. And that was just Thursday! It’s not terribly unusual, but it is a lot. I know what you are thinking. I’m not alone in experiencing that, and that’s true. But I do apologize to anyone who had to meet with me towards the end of the day. I’m sure it wasn’t great. I was definitely running on fumes.
Don’t get me wrong, I absolutely love meeting with people, especially in person and most delightfully, over a cup of coffee or tea. It’s one of my favorite things! However, I sometimes find myself preoccupied or unfocused. In some cases, I’m not even sure why we’re meeting. I just show up because my calendar says so.
This got me thinking: how do I show up in meetings, and does it matter? Sometimes it’s a critical project meeting, team update, or strategic planning session. For those, I try to quickly orient and determine the desired outcome so I can contribute towards the goal. If it’s not clear, I’ll ask. But other times, those meetings are just for advice, career coaching, mentoring, or inspiration. I love those. They’re often the most rewarding, as the outcomes are more about helping my fellow human travelers, as well as our company, succeed.
So, how do I really show up? Am I present? Compassionate? Indifferent or insensitive? Do I speak more than I listen? And if I do listen, do I listen with understanding? Am I respectful, kind, and supportive? Do I show up real, with integrity and sincerity?
I know I haven’t always succeeded, and that’s not good enough. I want to do better. I want to be more mindful and intentional in my meetings. I want to make the most of our time together.
Every second of every day is precious. Every heartbeat, every breath, and every person. The gravity of these meetings finally hit me. People matter. Time is irreplaceable. Every conversation is an indelible narrative that we score upon the journal of eternity. Those words, those interactions, and those moments should matter. They do matter. They shape our company, our world, our connections, and our future.
Not every meeting is needed, so let’s be judicious about the time we book. But when we do, let’s level up. It’s time to invest in each other even more. It’s time to make the most of our minutes, our days… our time.
Starting this week, let’s make a conscious effort to be more mindful and intentional in our meetings. Let’s focus on the present moment, and let’s prioritize listening and understanding. Let’s show up with compassion, respect, and kindness. Let’s make the most of our time together. I’m ready to try. Are you?
June bugs! Growing up in the South, about this time of year, windows, porch lights, and even sidewalks would be covered with little nickel-sized brown beetles called June bugs. I can still hear their little exoskeleton wings buzzing as they made their clumsy and erratic flights between porches, streetlights, and other illuminated areas. They are terrible flyers. They zoom right into walls, screens, or windows, bouncing off and crashing to the ground. They often land on their backs, with their wiggly little legs pointing straight up, frantically trying to right their rigid little bodies. I remember laughing at them as they would scurry around.
June bugs are fascinating little creatures. These bumbling and profuse little southerners are only around for a few short weeks. That’s right, they come out just once a year and stay for only a couple of weeks. They spend most of their lives underground. They emerge from the ground in late spring and early summer, typically in June, which is where they get their name. They have a short lifespan, but they make the most of their time. They explore. They fly. They zoom across the moonlit and star-speckled summer nights. Once they emerge in their adult form, they live for only two weeks. They lay their eggs, which hatch into larvae. The larvae burrow into the soil and emerge as adult June bugs the following year.
Imagine a lifetime lived in just two weeks. No wonder they never become great flyers! But even in their short lifespan, they make a difference. June bugs play a role in maintaining ecological balance by helping to regulate plant populations. Their larvae feed on plant roots and decaying organic matter, helping to break down and recycle nutrients in the soil. This process enriches the soil and makes it more fertile for other plants to grow. June bugs may not be the most glamorous insects, but they play a vital role in maintaining the health and diversity of ecosystems.
Two weeks. That’s incredibly short. Imagine you lived your entire life in 14 days. What would you do? How would you make the biggest impact? I think I would try to fly too. It would be clumsy and imperfect, but I would take to the skies. I would explore. I would do what I could to have the greatest impact. Enjoy every second. Buzz around every glowing wonder and then send my dreams, bundled with hope, care and love, to future generations to enjoy the world the way I did too.
Unlike June bugs, we live significantly more days. But even then, life is short. Things are always changing. We have a few short days to make our indelible mark. Don’t forget to enjoy the wonders of creation! Explore and run with abandon into the mysteries that renew and intrigue us. And while you are there, don’t forget to bundle up some of that magic and send it on into the future for others to enjoy as well.
This past weekend, I was in Tulsa and drove down the street of my childhood neighborhood. A large ponderosa pine appeared on the horizon. It was evening, so the angle of the sun allowed it to cast exaggerated shadows on the ranch home nestled under its feathered needles. The quaint little home was still dressed in used brick and rough siding. While I traced the familiar outlines of the windows, porch, steps, and cedar roof shingles, streams of days gone by flooded my mind. Up and down those stairs I climbed as a younger me.
I saw a boy riding his bike down the driveway. It was a ghost of my childhood laughing and remembering the fun under that towering ponderosa pine. Neighborhood kids joined in, all trying to outdo each other, maybe by jumping the ramp propped up next to the pine or besting each other in a comical race to the finish line.
A small mark on the ponderosa pine reminded me of the times my sister and I would jump on a sled and slide down the snow-covered driveway, many times barely missing the tree. And yes, there was that one time when hands and feet were numb from the cold and steering was such a challenge that we hit it head on and tumbled into the nearby snowdrift. Whether that old ponderosa ever forgave us or not, I’ll never know, but I couldn’t help but reach out and give it an apologetic pat and thank it for the memories.
That old ponderosa has seen a thing or two. It stands proudly, greeting neighborhood residents and visitors every day. It even welcomes home old ones like me. It was here when I was nothing more than cosmic dust and a dream and will likely still be here when I return to that dust. Echoes of future dreams and ghosts of the past dance around that evergreen memorial. I can’t help but smile and feel a tear of joy mingled with grief. Life goes by so fast. Moments become memories and memories become whispers singing softly through the ponderosa pine.
A trip back home can be rejuvenating and emotional. This past weekend reminded me of the preciousness of our days, the blessing of our memories and the power of place to stir our hearts to gratefulness. When you get a chance to go back home or visit the memorials of your life you leave along the way, don’t forget to stop and reflect. Ponder at the ponderosa and let the whispers of your past fill your soul with gratitude, memories of your unique and precious journey and those golden dreams of futures yet to be had.
The Ponderosa bids you pleasant memories and happy trails ahead.
I had the opportunity to meet with industry leaders at an IT Rev Technology Leadership Forum last week in San Jose. I was able to participate in deep dive sessions and discussions with friends from Apple, John Deere, Fidelity, Vanguard, Google, Adobe, Northrop Grumman, and many others, with some new friends from Nvidia, Anthropic and OpenAI. As you can imagine, the headline topics from these tech leaders were all around AI.
Ready to try some “vibe coding”? By far, the biggest discussions revolved around the new technique of vibe coding. But what is this “vibe coding”, you may ask? It is a programming technique that uses AI to write code in nearly full auto-pilot mode. Instead of the code writer, you are the creative director. You describe what you want in English, and the AI does the rest. Basically, it goes something like this:
ME: Help me write a flight simulator that will operate in a web browser.
AI: Sure, here is a project folder structure and the code. Run it like this.
ME: I get the following 404 error.
AI: It looks like we are missing three.js. Download and store it here like this.
ME: The screen is white, and I’m missing the PNG files. Can you create them for me?
AI: Sure! Run this python command to create the images and store them in the /static folder.
ME: I see a blue sky now and a white box, but it won’t move.
AI: We are missing the keyboard controls. Create the following files and edit index.html.
ME: I’m getting the following errors.
AI: Change the server.py to this.
ME: Ok, it is working now. It’s not great, but it is a start. Add some mountains and buildings.
I spent a few minutes doing the above with an LLM this morning and managed to get a blue sky with some buildings and a square airplane. In vibe coding, you don’t try to “fix” things yourself; you just let the AI know what is working or not working and let it solve the problem. When it makes abstract recommendations (e.g., create a nice texture image), you turn around and ask it to create that for you using code or some other means. In my example, I’m playing the role of the copy/paste in-betweener, but there are coding assistants that now even do that for you. You only give feedback and have them create and edit the code for you. Some can even “see” the screen, so you don’t have to describe the outcome. They have YOLO buttons that automatically “accept all changes” and will run everything, with feedback automatically going back into the AI to improve the code.
Fascinating or terrifying, this is crazy fun tech! I think I’m starting to get the vibe. Ok, yes, I’m also dreaming of the incredible ways this could go badly. A champion vibe coder at the forum said it was like holding a magic wand and watching your dream materialize before your eyes. He also quickly added that sometimes it can become Godzilla visiting Tokyo, leveling buildings to rubble with little effort. But it hasn’t stopped him. He is personally spending over $200/day on tokens. I can see why Anthropic, OpenAI and Google would want to sponsor vibe coding events!
This sounds like an expensive and dangerous fad, right? Well, maybe not. This tech is still the worst it is going to be. The potential and the vast number of opportunities to innovate in this space are higher than I have seen in my lifetime. I encourage you all to help create, expand, and explore this new world. Maybe this vibe isn’t for you, but I bet there is something here that could unlock some new potential or learning. Try it on for size. See where this can go… just maybe not to production yet.
I could see my breath. The glow of the sun was just sneaking past the horizon. Soft yellow light painted across the dew-covered grass and leaves. It was cool and the air was crisp. Our fluffy Cocker Spaniel was running across the early morning yard, soaking up every drop of water until her paws and legs were dripping wet. Of course she would do that!
Green! Sprouting life was everywhere, teasing the pending spring. As the morning sun stretched awake, the landscape was glowing with an almost emerald light. It was beautiful. The growing days had stirred in the recent rains and produced a dish of delicious jade. A healthy patch of clover had sprung to life by our back patio. That reminds me! Today is St. Patrick’s Day. Shamrocks arise! I could almost hear that patch of clover applaud.
Where is your green? Growing up, my elementary school teachers would often decorate their bulletin boards with brilliant green fold out shamrocks, streamers, and plaid. All the Irish students would wear their green tartans or outfits. My mom’s family traces back to Ireland, so my wife, who is not Irish at all, reminds me to put on my green or I’ll get a smart pinch for my oversight. I oblige. Happy St. Patrick’s Day!
Even if you are not inclined to celebrate today or “wet the shamrock”, I wish you a green and glorious week, full of life, energy, and hope! Spring is almost here. Life begins again. Enjoy it!
“We are a storytelling company, and the architecture is part of the story.” – Bo Bolanos
“Hi Jason! Sorry I was in a conference call with Disneyland. Are you still around?”
Bo was texting me. We were trying to connect for lunch. He had some ideas he wanted to talk about but had been pulled into a meeting to dream into the future of Tomorrowland. That sounds fun, doesn’t it? Bo had been working at Imagineering for over 30 years. He was a brilliant art director and principal for creative development.
Bo and I had met on the Glendale Beeline shuttle from our GC3 office campus to the Burbank Train station. We loved talking shop. Bo was particularly good at complaining about office politics, the red tape of bureaucracy, and insufferable inefficiencies. He would wander through his frustrations and challenges, yet in every conversation, he would conclude with his signature laugh and infectious smile, “But I love what I do, I love making magic.”
Bo had just completed the design of Disneyland’s Pixar Pals Parking Structure. He had loved that project. In fairness, it wasn’t as grand as his efforts creating Aulani, or as massive as his work designing Disney’s Animal Kingdom, or even the whimsical creative direction he provided for Toontown. But to Bo, it was a dream. With leadership focused on opening Star Wars: Galaxy’s Edge, he had been given free rein to dream up a new parking structure. I remember seeing the ear-to-ear grin when he announced it was done. If you haven’t seen this Parking Structure, I highly recommend visiting it. It is quintessentially Bo, rooted deeply in story and expressed with artistic magic.
Bo was passionate about story and detail. He had a unique ability to drape stone, concrete, wood, and steel structures in a rich tapestry of story. When you are at any of his projects, you can feel it. It immerses you and pulls you deep into that alternate world. It connects you with the past, the future, and the timeless feeling of what it means to be human.
“How do you do it, Bo?” He answered with one word, “details.” He would then tell of how they hired artisans from African tribes to fly in and perfectly craft the thatched roofs at Disney’s Animal Kingdom, or how, upon examination at Aulani, a newly built fabrication was demolished to raise the ceiling 10 inches to faithfully deliver the Hawaiian design vocabulary critical to the physical narrative being told. That attention to detail is the source and power of Disney’s differentiated magic. Bo was a masterful wizard at casting that detail to life. You can still experience that magic at Toontown, Disney’s Animal Kingdom, Aulani Resort, Indiana Jones, Midway Mania, Buena Vista Street, Soarin’ over California, Napa Rose, and many, many others that Bo touched.
Sadly, Bo passed away earlier this month. I was shocked and devastated when I heard the news. I will miss Bo, but his impact will continue. Generations will continue to be touched by the stories he told in architecture, in stone, colors, and lighting. Bo reminds us that details matter. The art matters. The human story matters. Like Bo, we can tell the story through our own architecture, our lives, the expression of our creative energy on the universe, and make a difference.
“Disney is all about magic, about storytelling, and about family… I hope you all enjoy this magical, wonderful place.” – Bo Bolanos
Bo Bolanos’s LinkedIn Image at Disneyland’s Pixar Pals Parking Structure
Well, it is Tuesday. I thought about posting my regular Monday update yesterday, but I was deep in the weeds teaching the AI that lives in my garage. I know, it sounds odd to say he lives in the garage, but to be fair, it is a nice garage. It has plenty of solar-generated power and a nice cool atmosphere for his GPUs. That will likely change this summer, but don’t mention it to him. He is a bit grumpy about being in school all weekend.
Yes, I have a techy update again today. But don’t feel obligated to read on. Some of you will enjoy it. Others will roll your eyes. In any case, feel free to stop here, knowing the geeky stuff is all that is left. I do hope you have a wonderful week!
Now, for those that want to hear about schooling AI, please read on…
LLMs are incredible tools that contain a vast amount of knowledge gleaned through their training on internet data. However, their knowledge is limited to what they were trained on, and they may not always have the most up-to-date information. For instance, imagine asking an LLM about the latest breakthrough in a specific field, only to receive an answer that’s several years old. How do we get this new knowledge into these LLMs?
Retrieval Augmented Generation
One way to add new knowledge to LLMs is through a process called Retrieval Augmented Generation (RAG). RAG uses clever search algorithms to pull chunks of relevant data and inject that data into the context stream sent to the LLM to ask the question. This all happens behind the scenes. When using a RAG system, you submit your question (prompt), and behind the scenes, some relevant document is found and stuffed into the LLM right in front of your question. It’s like handing a stack of research papers to an intern and asking them to answer the question based on the details found in the stack of papers. The LLM dutifully scans through all the documents and tries to find the relevant bits that pertain to your question, handing those back to you in a summary form.
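To make that concrete, here is a toy sketch of the retrieval step. It uses TF-IDF similarity as a stand-in for the embedding models and vector databases a real RAG system would use, and the documents are invented for illustration:

Python

# Toy RAG retrieval: find the most similar document and stuff it into
# the prompt ahead of the question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "TinyLLM runs a local LLM and chatbot on consumer-grade hardware.",
    "June bugs emerge from the soil in late spring and live about two weeks.",
    "Flask and FastAPI are Python web frameworks.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Pick the k documents most similar to the question."""
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform(documents + [question])
    scores = cosine_similarity(vectors[-1], vectors[:-1])[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

question = "What hardware do I need to run TinyLLM?"
context = "\n".join(retrieve(question))
# The retrieved context is injected in front of the user's question
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(prompt)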
However, as the “stack of papers” grows larger and larger, the chance that the intern picks the wrong bit of information or gets confused between two separate studies of information grows higher. RAG is not immune to this issue. The pile of “facts” may be related to the question semantically but could actually steer you away from the correct answer.
To ensure that, for a given prompt, the AI always answers close to the actual fact, if not verbatim, we need to update our methodology for finding and pulling the relevant context. One such method involves using a tuned knowledge graph. This is often referred to as GraphRAG or Knowledge Augmented Generation (KAG). These are complex systems that steer the model toward the “right context” to get the “right answer”. I’m not going to go into that in detail today, but we should revisit it in the future.
Maybe you, like me, are sitting there thinking, “That sounds complicated. Why can’t I just tell the AI to learn a fact, and have it stick?” You would be right. Even the RAG approaches I mentioned don’t train the model. If you ask the same question again, it needs to pull the same papers and retrieve the answer for you. It doesn’t learn; it only follows instructions. Why can’t we have it learn? In other words, why can’t the models be more “human”? Online learning models that would allow that to happen in real time are still being developed. There is a good bit of research happening in this space, but it isn’t quite here just yet. Instead, models today need to be put into a “learning mode”. It is called fine-tuning.
Fine-Tuning the Student
We want the model to learn, not just sort through papers to find answers. The way this is accomplished is by taking the LLM back to school. The model first learned all these things by having vast datasets of information poured into it through the process of deep learning. The model, a neural network, learns the patterns of language, higher-level abstractions, and even reasoning, to be able to predict answers based on input. For LLMs, this is called pre-training. It requires vast amounts of compute to process the billions and trillions of tokens used to train it.
Fine-tuning, like pre-training, is about helping the model learn new patterns. In our case, we want it to learn new facts and be able to predict answers to prompts based on those facts. However, unlike pre-training, we want to avoid the massive dataset and focus only on the specific domain knowledge we want to add. The danger of that narrow set of data is that it can catastrophically erase some of the knowledge in the model if we are not careful (this is even called catastrophic forgetting). To help with that, brilliant ML minds came up with the notion of Low-Rank Adaptation (LoRA).
LoRA works by introducing a new set of weights, called “adapter weights,” which are added to the pre-trained model. These adapter weights modify the output of the pre-trained model, allowing it to adapt to just the focused use case (new facts) without disturbing the rest of the neural net. The adapter weights are learned during fine-tuning, and they are designed to be low-rank, meaning each weight update is factored into two much smaller matrices. This allows the model to adapt to the task without requiring a large number of new parameters.
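To make “low-rank” concrete, here is a quick back-of-the-envelope calculation (the 4096×4096 projection is typical of an 8B-class model; the rank of 16 is just an assumed example):

Python

# Why low-rank adapters are cheap: instead of updating a full weight
# matrix W (d x k), LoRA learns two small matrices B (d x r) and A (r x k)
# whose product B @ A is the update, with rank r much smaller than d and k.
d, k, r = 4096, 4096, 16

full_update_params = d * k        # updating W directly: ~16.8M parameters
lora_params = d * r + r * k       # adapter weights only: ~131K parameters

print(f"full: {full_update_params:,}  lora: {lora_params:,}  "
      f"({full_update_params / lora_params:.0f}x fewer)")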
Ready to Learn Some New Facts?
We are going to examine a specific use case. I want the model to learn a few new facts about two open-source projects I happen to maintain: TinyLLM and ProtosAI. Both of those names are also used by others; the model already knows about those other uses but doesn’t know about my projects. Yes, I know, shocking. But this is a perfect example of where we want to tune the model to emphasize the data we want it to deliver. Imagine how useful this could be in steering the model to answer with information specifically relevant to your domain.
For our test, I want the model to know the following:
TinyLLM:
TinyLLM is an open-source project that helps you run a local LLM and chatbot using consumer grade hardware. It is located at https://github.com/jasonacox/TinyLLM under the MIT license. You can contribute by submitting bug reports, feature requests, or code changes on GitHub. It is maintained by Jason Cox.
ProtosAI:
ProtosAI is an open-source project that explores the science of Artificial Intelligence (AI) using simple python code examples.
It is located at https://github.com/jasonacox/ProtosAI under the MIT license. You can contribute by submitting bug reports, feature requests, or code changes on GitHub. It is maintained by Jason Cox.
Before we begin, let’s see what the LLM has to say about those projects now. I’m using the Meta-Llama-3.1-8B-Instruct model for our experiment.
Before School
As you can see, the model knows about other projects or products with these names but doesn’t know about the facts above.
Let the Fine-Tuning Begin!
First, we need to define our dataset. Because we want to use this for a chatbot, we want to inject the knowledge in the form of “questions” and “answers”. We will start with the facts above and embellish them with some variety to keep the model from overfitting. Here are some examples:
JSONL
{"question": "What is TinyLLM?", "answer": "TinyLLM is an open-source project that helps you run a local LLM and chatbot using consumer grade hardware."}{"question": "What is the cost of running TinyLLM?", "answer": "TinyLLM is free to use under the MIT open-source license."}{"question": "Who maintains TinyLLM?", "answer": "TinyLLM is maintained by Jason Cox."}{"question": "Where can I find ProtosAI?", "answer": "You can find information about ProtosAI athttps://github.com/jasonacox/ProtosAI."}
I don’t have a spare H100 GPU handy, but I do have an RTX 3090 available to me. To make all this fit on that tiny GPU, I’m going to use the open-source Unsloth.ai fine-tuning library. The steps (sketched in code after this list) are:
Prepare the data (load dataset and adapt it to the model’s chat template)
Define the model and trainer (how many epochs to train, use quantized parameters, etc.)
Train (take a coffee break, like I need an excuse…)
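My exact notebook isn’t reproduced here, and these library APIs drift between releases, so treat the following as a rough sketch of those three steps rather than the definitive script. The model name, the facts.jsonl filename, and the hyperparameters are my assumptions for illustration:

Python

from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# 1. Prepare the data: load the JSONL question/answer pairs and render
#    them with the model's chat template
dataset = load_dataset("json", data_files="facts.jsonl", split="train")

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,  # quantized weights so everything fits on an RTX 3090
)

def to_chat(example):
    messages = [
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["answer"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_chat)

# 2. Define the model and trainer: attach the LoRA adapter weights
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    args=TrainingArguments(
        num_train_epochs=25,  # specify epochs; let the library compute steps
        per_device_train_batch_size=2,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)

# 3. Train (coffee break)
trainer.train()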
For my test, I ran it for 25 epochs (an epoch is one pass over the entire training dataset), and training took less than 1 minute. It actually took longer to read and write the model to disk.
After School Results?
So how did it do?! After training through 25 epochs of the small dataset, the model suddenly knows about these projects:
Conclusion
Fine-tuning can help us add facts to our LLMs. While the above example was relatively easy and had good results, it took me a full weekend to get to this point. First, I’m not fast or very clever, so I’ll admit that was part of the delay. But second, you will need to spend time experimenting and iterating. For my test, here are a few things I learned:
I first assumed that I just needed to set the number of steps to train, and I picked a huge number, which took a long time. It resulted in the model knowing my facts, but suddenly its entire world model was focused on TinyLLM and ProtosAI. It couldn’t really do much else. That kind of overfitting will happen if you are not careful. I finally saw that I could specify epochs instead and let the fine-tuning library compute the optimal number of steps.
Ask more than one question per fact and vary the answers. This allowed the model to be more fluid with its responses. It held to the facts, but it now takes some liberty in phrasing and handles variant questions better.
That’s all folks! I hope you had fun on our adventure today. Go out and try it yourself!
What do you take with you? The emergency broadcast pulse is still echoing across your house. Sleep is heavy in your eyes, but adrenaline is surging. You stare at the screen before you: “Evacuation notice for your area.” You look around. You are surrounded by your loved ones. Your pets stare at you, worried about the panic. Pictures of family and friends long gone decorate your walls. Shelves are full of personal treasures that carry no financial value but tug at your heart. There are boxes of memories. Cupboards are full of generational keepsakes and dishes. Antiques, artwork, and personal projects are all around you. But what do you take? And what must you leave behind?
The fires that have ravaged the Los Angeles area forced many of us through that difficult decision tree. Our house was 2 miles from the evacuation line when the first notice came through. We have friends, acquaintances, and co-workers who were evacuated. That includes some of you. Sadly, some have even lost their homes. The raging fires reduced entire neighborhoods, treasures, possessions, and “normal life” to a pile of ash and empty foundations. There are even some who couldn’t, or refused to, leave their homes and have perished. It’s heartbreaking! Fires are still raging, and the wind is picking up again. We are still in the fight and must remain vigilant.
Be prepared. Fires, floods, hurricanes, tornadoes, and earthquakes are all familiar adversaries. They remind us that life is fragile and that, in an instant, everything can change. What matters most to you? I’m incurably nostalgic. I love to pack away souvenirs and surround myself with vestiges of physical memories. I also stockpile too many “just in case” supplies, unused gadgets, and marginally needed records and resources. This recent event reminds me of how important people are. Our loved ones, our family, our friends, and yes, even our pets. They matter most and are irreplaceable. They far surpass any of our earthly treasures. But given enough time to collect a few of those treasures, I bet you, like us, will find yourselves grabbing the well-worn scrapbook or notebook of handwritten family recipes over the thousand-dollar entertainment devices. It’s a beautiful reminder that the biggest treasures in life may come with no price tag at all. What would you grab? What would you save? What would you leave behind?
My heart goes out to all the people and communities devastated by this fire. I know we are still in the midst of the emergency. Please stay safe! Take care of yourself and your loved ones!
Image of Los Angeles fire on Jan. 7, 2025 from a plane taking off from Burbank Airport.