<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[SoloDev.app]]></title><description><![CDATA[My name is Brian, I'm from New Zealand, and I like SysOps. In 2025, I will use reasoning language models to build apps. Join me as I also explore machine learni]]></description><link>https://solodev.app</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1716619458582/8ed396da-15db-41fc-9df7-a74e6febfb85.png</url><title>SoloDev.app</title><link>https://solodev.app</link></image><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 17:53:49 GMT</lastBuildDate><atom:link href="https://solodev.app/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Installing BMAD v4 for Roo Code.]]></title><description><![CDATA[TL;DR.
This post provides a step-by-step process for installing and setting up the BMAD Method for Roo Code, which integrates AI with Agile methodologies to enhance software development. It covers prerequisites, installation steps, and customisation ...]]></description><link>https://solodev.app/installing-bmad-v4-for-roo-code</link><guid isPermaLink="true">https://solodev.app/installing-bmad-v4-for-roo-code</guid><category><![CDATA[RooCode]]></category><category><![CDATA[AIMethodologies]]></category><category><![CDATA[SoftwareEfficiency]]></category><category><![CDATA[BMADMethod]]></category><category><![CDATA[agile development]]></category><category><![CDATA[software development]]></category><category><![CDATA[Workflow Automation]]></category><category><![CDATA[VS Code]]></category><category><![CDATA[node js]]></category><category><![CDATA[ #TechGuide ]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Fri, 19 Dec 2025 11:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766818731776/adfa0197-6543-41b5-8895-982f5e20971a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-tldr">TL;DR.</h2>
<p>This post provides a step-by-step process for installing and setting up the BMAD Method for Roo Code, which integrates AI with Agile methodologies to enhance software development. It covers prerequisites, installation steps, and customisation options that streamline workflows and automate tasks, improving the SDLC (Software Development Life-Cycle) for many software solutions.</p>
<blockquote>
<p><strong>Attributions:</strong></p>
<ul>
<li><p><a target="_blank" href="https://github.com/bmad-code-org/BMAD-METHOD"><strong><em>The BMAD GitHub Repo</em></strong></a> <strong><em>↗, and</em></strong></p>
</li>
<li><p><a target="_blank" href="https://bmadcodes.com/"><strong><em>The BMAD Website</em></strong></a> <strong><em>↗.</em></strong></p>
</li>
</ul>
</blockquote>
<hr />
<h2 id="heading-an-introduction">An Introduction.</h2>
<p>This guide walks through the installation and setup of BMAD for Roo Code, a framework that integrates AI with Agile methodologies to improve my software development efficiency.</p>
<blockquote>
<p>The purpose of this post is to show how to set up the BMAD Method so that it works with Roo Code.</p>
</blockquote>
<hr />
<h2 id="heading-the-big-picture">The Big Picture.</h2>
<p>BMAD for Roo Code is essential for my understanding of how to effectively integrate AI with Agile methodologies. This guide walks through the installation and setup process, providing insights into how the BMAD Method enhances my software development by automating many tedious tasks. By following the steps outlined in this post, the result is a streamlined workflow that leverages innovations to boost my effectiveness when creating my projects.</p>
<blockquote>
<p>NOTE: The following process results in a single installation that is used across multiple projects.</p>
</blockquote>
<hr />
<h2 id="heading-prerequisites">Prerequisites.</h2>
<ul>
<li><p>A Debian-based Linux distro (I use Ubuntu),</p>
</li>
<li><p>The Roo Code Extension,</p>
</li>
<li><p>VS Code, and</p>
</li>
<li><p>Node v20+.</p>
</li>
</ul>
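<p>Before starting, I can confirm the Node prerequisite from the terminal:</p>
<pre><code class="lang-bash">node -v   # should print v20.x or later
</code></pre>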
<hr />
<h2 id="heading-updating-my-base-system">Updating my Base System.</h2>
<ul>
<li>From the (base) terminal, I update my (base) system:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<blockquote>
<p>NOTE: The Ollama LLM manager is already installed on my (base) system.</p>
</blockquote>
<hr />
<h2 id="heading-what-is-the-bmad-method">What is the BMAD Method?</h2>
<p>BMAD is a framework that integrates AI with Agile methodologies to streamline software development. It utilises specialised AI agents to manage tasks and automate repetitive processes, which improves the SDLC (Software Development Life-Cycle) for many software projects.</p>
<hr />
<h2 id="heading-installing-the-bmad-method">Installing the BMAD Method.</h2>
<p>From the terminal, I install the BMAD Method:</p>
<pre><code class="lang-bash">npx bmad-method install
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766819052178/6704ef38-4e0e-4e97-b398-9097828bb84d.png" alt class="image--center mx-auto" /></p>
<ul>
<li>On the next screen, I provide the full path to my installation directory and tap the ENTER key:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766819530854/b7606fe1-8be6-4948-829b-27aab06eff42.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>NOTE: As I am already in the installation directory, I type a period (.) to install BMAD into the current location.</p>
</blockquote>
<ul>
<li>On the next screen, I use the up/down arrow keys to move the selector, use the SPACEBAR to select/deselect multiple options, and hit the ENTER key:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766820337172/34de4202-38c9-4800-8d48-e75e713a64e4.png" alt class="image--center mx-auto" /></p>
<ul>
<li>On the next two screens, I type ‘y’ for both of the document sharding options, and hit the ENTER key:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766820408438/0f5b4fa5-137b-4732-8d90-bbb2db1d4e7f.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>NOTE: Sharding is used to intelligently break large documents into a collection of smaller documents. The purpose of sharding is to provide LLMs with prompts that do not overwhelm the context window.</p>
</blockquote>
<ul>
<li>On the next screen, I use the up/down arrow keys to move the selector and the SPACEBAR to make my selections, and then I tap the ENTER key:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766821654969/9f0aee8e-49e3-401d-a2de-04d9ffb76de2.png" alt class="image--center mx-auto" /></p>
<ul>
<li>On the final screen, I type ‘n’ to bypass the installation of pre-built web bundles, and hit the ENTER key:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766821800120/413fce7e-837a-4a32-ba3c-127dd6ad99e0.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Here is the screen after a successful installation:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766821980588/46887f8f-8fdf-4b00-83c0-f721bebcacdc.png" alt class="image--center mx-auto" /></p>
<ul>
<li>And here is the installation directory:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766822156686/c14b1395-56b5-49e7-9cc7-9d07ca69203d.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766823459230/0cc43e95-77ab-4ad8-8aa7-35f31b8d16c6.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-editing-the-roomodes-amp-kilocodemodes-files">Editing the .roomodes &amp; .kilocodemodes Files.</h2>
<p>The trick to using a single instance of BMAD across multiple projects is to:</p>
<ul>
<li><p>Open the files in a text editor,</p>
</li>
<li><p>Change every instance of a relative path to an absolute path and save the results, and</p>
</li>
<li><p>Add copies of these files into the root directory of every project.</p>
</li>
</ul>
<p>In this post, for example, the installation directory is <code>/home/brian/.roo/commands</code>, so in the .roomodes and .kilocodemodes files I would change a relative path, like <code>.bmad-core/agents/ux-expert.md</code>, so that it becomes an absolute path, like <code>/home/brian/.roo/commands/.bmad-core/agents/ux-expert.md</code>.</p>
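<p>As a minimal sketch of that edit, a pair of sed one-liners can rewrite the relative prefix in place. The paths below assume my installation directory; adjust them to yours:</p>
<pre><code class="lang-bash"># Back up each file (.bak), then rewrite the relative .bmad-core/ prefix
# to the absolute installation path (paths assume my setup).
sed -i.bak 's#\.bmad-core/#/home/brian/.roo/commands/.bmad-core/#g' .roomodes
sed -i.bak 's#\.bmad-core/#/home/brian/.roo/commands/.bmad-core/#g' .kilocodemodes
</code></pre>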
<hr />
<h2 id="heading-bmad-v6">BMAD v6.</h2>
<p>I also install BMAD v6 into the same location. Here is the installation command for BMAD v6 (currently in Public Alpha):</p>
<pre><code class="lang-bash">npx bmad-method@alpha install
</code></pre>
<hr />
<h2 id="heading-the-results">The Results.</h2>
<p>The installation and setup of the BMAD Method for Roo Code provides a robust framework for integrating AI with Agile methodologies, enhancing the software development process. By following the detailed steps outlined, I can effectively implement the BMAD Method, customise Roo Code with new modes, and create reusable slash commands to streamline my workflow. This approach not only automates repetitive tasks but also fosters improved collaboration and efficiency throughout the software development life cycle. As I continue to explore and utilise these tools, I must remember to adapt and customise these utilities to best fit the unique needs of each project.</p>
<hr />
<h2 id="heading-in-conclusion">In Conclusion.</h2>
<p>I learned how to seamlessly integrate AI with Agile methodologies using the BMAD Method for Roo Code. This comprehensive guide covered the installation, setup, and customisations that enhance my software development processes and automate many of the tasks I perform. The BMAD Method is perfect for developers looking to streamline their workflow with innovative tools.</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<h2 id="heading-hash-tags">Hash Tags.</h2>
<p>#BMADMethod #RooCode #AIMethodologies #AgileDevelopment #SoftwareDevelopment #WorkflowAutomation #VSCode #NodeJS #TechGuide #SoftwareEfficiency</p>
]]></content:encoded></item><item><title><![CDATA[Installing Proxmox VE on a Spare PC.]]></title><description><![CDATA[TL;DR.
This post is a comprehensive walk-through on how I install PVE (Proxmox Virtual Environment) on a spare PC. I cover the step-by-step installation process, and tips for optimizing the virtual environment. This article is ideal for tech enthusia...]]></description><link>https://solodev.app/installing-proxmox-ve-on-a-spare-pc</link><guid isPermaLink="true">https://solodev.app/installing-proxmox-ve-on-a-spare-pc</guid><category><![CDATA[ProxmoxVE]]></category><category><![CDATA[IntelNUC]]></category><category><![CDATA[ServerCluster]]></category><category><![CDATA[proxmox]]></category><category><![CDATA[virtualization]]></category><category><![CDATA[Homelab]]></category><category><![CDATA[containers]]></category><category><![CDATA[Virtual Machines]]></category><category><![CDATA[networking]]></category><category><![CDATA[serversetup]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Ubuntu]]></category><category><![CDATA[ #TechGuide ]]></category><category><![CDATA[pve]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Wed, 19 Nov 2025 11:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763582854257/08b76608-bda7-4561-99a8-6abb859202b6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-tldr">TL;DR.</h2>
<p>This post is a comprehensive walk-through on how I install PVE (Proxmox Virtual Environment) on a spare PC. I cover the step-by-step installation process, and tips for optimizing the virtual environment. This article is ideal for tech enthusiasts who want to maximize the capabilities of their Homelab by setting up a robust virtualization platform that supports both containers and virtual machines.</p>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><em>A post from</em> <a target="_blank" href="https://proxmox.com/en/"><em>Proxmox</em></a> <em>↗ on</em> <a target="_blank" href="https://forum.proxmox.com/threads/proxmox-beginner-tutorial-how-to-set-up-your-first-virtual-machine-on-a-secondary-hard-disk.59559/"><em>setting up a Proxmox VE virtual machine</em></a> <em>↗, and</em></p>
<p>A video from <a target="_blank" href="https://www.youtube.com/@Tailscale">Tailscale</a> <em>↗</em> about <a target="_blank" href="https://www.youtube.com/watch?v=zngSuqCM4d8">installing Proxmox VE on a PC</a> <strong><em>↗.</em></strong></p>
</blockquote>
<hr />
<h2 id="heading-an-introduction">An Introduction.</h2>
<p>Containers and virtual machines are technologies that allow operating systems and applications to be isolated within a runtime environment. Depending on the hardware specifications, PVE allows multiple containers and virtual machines to run on a single PC:</p>
<blockquote>
<p>The purpose of this post is to demonstrate how I install PVE and create a container.</p>
</blockquote>
<hr />
<h2 id="heading-the-big-picture">The Big Picture.</h2>
<p>Learn how I efficiently install PVE on a spare PC with this comprehensive guide. Discover the prerequisites, step-by-step installation process, and tips that I use to optimize my virtual environment setup. PVE is perfect for tech enthusiasts looking to maximize their Homelab PCs.</p>
<hr />
<h2 id="heading-prerequisites">Prerequisites.</h2>
<ul>
<li><p>A Debian-based Linux distro (I use Ubuntu),</p>
</li>
<li><p>A USB Thumb Drive.</p>
</li>
</ul>
<hr />
<h2 id="heading-updating-my-base-system">Updating my Base System.</h2>
<ul>
<li>From the (base) terminal, I update my (base) system:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<blockquote>
<p>NOTE: The Ollama LLM manager is already installed on my (base) system.</p>
</blockquote>
<hr />
<h2 id="heading-what-are-the-specs-for-my-spare-pc">What are the Specs for my Spare PC?</h2>
<p>An Intel NUC (Next Unit of Computing) is a small-form-factor computer designed by Intel, which offers a compact and powerful computing solution. This PC typically comes without RAM, storage, or an operating system, allowing me to customize the hardware according to my needs.</p>
<h3 id="heading-nuc-specifications">NUC Specifications.</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Model</td><td>BXNUC10i3FNHN</td></tr>
</thead>
<tbody>
<tr>
<td>Processor</td><td>Intel i3-10110U 2.10GHz Dual Core, 4 Threads, Up to 4.10GHz, 4MB SmartCache</td></tr>
<tr>
<td>Memory</td><td>Dual Channel, 2x DDR4-2666 SODIMM slots, 1.2V</td></tr>
<tr>
<td>Graphics</td><td>Intel UHD Graphics, 1x HDMI 2.0a Port, 1x USB 3.1 Gen 2 (10 Gbps), DisplayPort 1.2 via USB-C</td></tr>
<tr>
<td>Audio</td><td>Up to 7.1 surround audio via HDMI or DisplayPort signals, Headphone/microphone jack on the front panel, dual array front mics on the chassis front</td></tr>
<tr>
<td>Peripheral Connectivity</td><td>1x HDMI 2.0 Port with 4K at 60Hz, 1x USB 3.1 Gen 2 (10 Gbps), DisplayPort 1.2 via USB-C, 1x Front USB 3.1 Type A (Gen 2) Port, 1x Front USB 3.1 Type-C (Gen 2) Port, 2x Rear USB 3.1 Type A (Gen 2), 2x Ethernet Ports, 2x Internal USB 2.0 via header</td></tr>
<tr>
<td>Storage</td><td>1x M.2 22x42/80 (key M) slot for SATA3 or PCIe X4 Gen3 NVMe, SATA Interface, SDXC slot with UHS-II support</td></tr>
<tr>
<td>Networking</td><td>Intel Wi-Fi 6 AX201, Bluetooth, i219-V Gigabit Ethernet</td></tr>
<tr>
<td>Power Adapter</td><td>19VDC Power Adapter</td></tr>
</tbody>
</table>
</div><h3 id="heading-hardware-specifications">Hardware Specifications.</h3>
<table><tbody><tr><td><p>Storage</p></td><td><p>256GB M.2 internal (50GB/CT), 256GB SSD internal, 2TB HDD external</p></td></tr><tr><td><p>Memory</p></td><td><p>64GB (12288/CT, 2048/Swap)</p></td></tr><tr><td><p>OS</p></td><td><p>A modified Debian LTS kernel running under PVE</p></td></tr></tbody></table>

<hr />
<h2 id="heading-what-is-pve">What is PVE?</h2>
<p>PVE (Proxmox Virtual Environment) is an open-source virtualization platform designed for setting up hyper-converged infrastructure and, under the GNU AGPLv3 license, can be used for commercial purposes. It lets me deploy and manage containers and virtual machines. PVE is built on a modified Debian LTS (Long Term Support) kernel, and supports two types of virtualization: containers with LXC (Linux Containers) and virtual machines with KVM (Kernel-based Virtual Machine). PVE features a web-based management interface, and there is also a mobile app available for managing PVEs.</p>
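<p>Once PVE is installed, two quick shell commands confirm the platform described above (both are standard on a PVE host):</p>
<pre><code class="lang-bash">pveversion   # prints the installed Proxmox VE version
uname -r     # prints the modified Debian kernel release PVE runs on
</code></pre>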
<hr />
<h3 id="heading-creating-a-pve-installation-thumb-drive">Creating a PVE Installation Thumb Drive.</h3>
<ul>
<li><p>I download the PVE ISO file from <a target="_blank" href="https://proxmox.com/en/downloads/proxmox-virtual-environment/iso">https://proxmox.com/en/downloads/proxmox-virtual-environment/iso</a>.</p>
</li>
<li><p>I grab a 32GB thumb drive and label it.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748578190826/30b36cc8-7598-48ce-bddb-1b1f75a3033c.jpeg" alt class="image--center mx-auto" /></p>
<ul>
<li><p>I plug the thumb drive into my PC.</p>
</li>
<li><p>I start the <code>balenaEtcher-1.14.3-x64.AppImage</code> imaging utility that runs on Ubuntu.</p>
</li>
</ul>
<blockquote>
<p>NOTE: There are versions of Balena Etcher for Windows, macOS, and (x64 &amp; x86) Linux.</p>
</blockquote>
<ul>
<li>I select the 1.57GB ISO file as the source, the 32GB thumb drive as the target, and then I click the blue <code>Flash</code> button.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748578590914/39679041-2ebc-4190-b0bd-a7a0e1540655.png" alt class="image--center mx-auto" /></p>
<ul>
<li>After the ISO has been successfully flashed onto the thumb drive, I eject the thumb drive and remove it from my PC.</li>
</ul>
<h3 id="heading-installing-pve">Installing PVE.</h3>
<blockquote>
<p>NOTE: My PVE setup uses 3 drives that are directly connected to the NUC. I have an internal 256GB M.2 drive that uses the NVMe interface labelled <code>prox-int-nvme</code>, an internal 256GB SSD that uses the SATA interface which has been split into 2 × 128GB partitions labelled <code>prox-int-sata1</code> &amp; <code>prox-int-sata2</code>, and an external 2TB HDD that uses the USB 3.0 interface labelled <code>prox-ext-usb3</code>. These configurations will be altered during the PVE setup process.</p>
</blockquote>
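<p>Before installing, I can confirm this drive layout from a Linux shell; the column selection here is my own choice:</p>
<pre><code class="lang-bash">lsblk -o NAME,SIZE,TRAN,MODEL   # lists drives with size and interface (nvme/sata/usb)
</code></pre>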
<ul>
<li><p>I plug the PVE installation thumb drive into the NUC.</p>
</li>
<li><p>I power up the NUC.</p>
</li>
<li><p>I follow the installation instructions.</p>
</li>
<li><p>I use the following network settings that work on my LAN:</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762487898522/3828686b-8196-4dc1-bb23-00c6ebc09d75.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>NOTE: During the Management Network Configuration setup, I can use IPv4 or IPv6 but I CANNOT mix the 2 protocols. The Management Interface is the name of the NIC (Network Interface Card) that is installed in the NUC which, in my case, is eno1. The Hostname (FQDN) only matters if I intend to open, and host, PVE over the Internet. The IP Address (CIDR) is 192.168.0.60/24. The PVE tells the router that this is the IP address it wants. The Gateway is the IP address of my router, which is 192.168.0.1, and is needed to connect PVE to the LAN. The DHCP server is found at 192.168.0.1 because my router includes the server that is responsible for assigning IP addresses.</p>
</blockquote>
<ul>
<li><p>After installation, the spare PC will reboot.</p>
</li>
<li><p>At this time, I remove the USB installation thumb drive.</p>
</li>
<li><p>At the login screen, I make a note of the PVE IP address and :port number that is displayed.</p>
</li>
<li><p>On a PC that is connected to the same network as PVE, I open a browser, visit the IP address and :port, and bookmark that address.</p>
</li>
<li><p>At the browser login screen, my user name is ‘root’ and my ‘password’ is the same one I gave during the installation.</p>
</li>
<li><p>From the terminal, I SSH into PVE with root@ip_address and password.</p>
</li>
</ul>
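<p>For example, the SSH login from another PC on my LAN looks like this (the IP address comes from the installation above):</p>
<pre><code class="lang-bash">ssh root@192.168.0.60   # log in as root with the password set during installation
</code></pre>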
<hr />
<h2 id="heading-a-note-about-routers-and-dhcp-servers">A Note about Routers and DHCP Servers.</h2>
<p>My router has 2 jobs:</p>
<ul>
<li><p>Connect to an ISP (Internet Service Provider) that, in turn, provides access to the Internet, and</p>
</li>
<li><p>Route that connection to all the wired, and wireless, devices that share the LAN (Local Area Network).</p>
</li>
</ul>
<p>As the name suggests, Internet connectivity is <em>routed</em> to all of the linked devices in the LAN. There are many devices, like smart phones, tablets, PCs, notebooks, and others, that use an Internet connection to improve their functionality. Many devices, like smart TVs, <em>require</em> that connectivity.</p>
<p>The problem is that all the devices that connect to the LAN require unique identifiers. These identifiers are called IP addresses. But where does a device get an IP address? To solve the IP address problem, my router has a built-in DHCP server where DHCP stands for Dynamic Host Configuration Protocol. Almost all routers have a DHCP server and the purpose of this server is to assign a dynamic IP address to every wired, and wireless, device in the LAN.</p>
<p>In most cases, each device in the LAN is dynamically, i.e. automatically, assigned an IP address from a pool of available, unassigned addresses. Most often, devices will use the same dynamic IP addresses when they connect to the LAN, but sometimes the DHCP server will issue a new IP address. This is a fine solution and is NOT a problem. In most cases.</p>
<p>Servers, however, are special use cases. PVE, as well as the containers and virtual machines it manages, requires <em>static</em> IP addresses. My NUC 10, and the containers it hosts, need static IP addresses if they are to be reliably reachable on my LAN. The reason I need IP addresses <em>that DO NOT CHANGE</em> is that I will set up a Kubernetes cluster, and each node needs to be able to find the others. (Setting up a cluster is beyond the scope of this post.)</p>
<p>Replacing dynamic IP addresses with static IP addresses requires:</p>
<ul>
<li><p>Accessing my router and making changes to the DHCP settings for each container and virtual machine (which is <em>also</em> beyond the scope of this post), and</p>
</li>
<li><p>Reflecting those changes to each container and virtual machine running on PVE.</p>
</li>
</ul>
<hr />
<h2 id="heading-the-pve-server">The PVE Server.</h2>
<p>PVE (Proxmox Virtual Environment) is the server that hosts the containers and virtual machines I deploy.</p>
<hr />
<h3 id="heading-accessing-pve">Accessing PVE.</h3>
<ul>
<li>I use a browser to login to PVE:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762491437546/674cff6a-903b-449a-bd9c-885c314008c7.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>On the left of the screen, I go to <code>Datacenter &gt; nuclab60</code>.</p>
</li>
<li><p>In the 2nd pane, I click ‘Shell‘.</p>
</li>
</ul>
<hr />
<h3 id="heading-the-helper-script">The Helper Script.</h3>
<ul>
<li><p>In a new browser tab, I visit <a target="_blank" href="https://community-scripts.github.io/ProxmoxVE/">http://helper-scripts.com/</a>.</p>
</li>
<li><p>I search for ‘pve post install‘.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762490969910/44ff4e69-0f0d-4fbc-8aac-9494f3319a82.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I copy the script command.</li>
</ul>
<hr />
<h3 id="heading-running-the-helper-script">Running the Helper Script.</h3>
<ul>
<li>Back in the Shell for nuclab60, I run the helper script command from the terminal:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762492508747/308667bd-d3eb-4cae-849e-42fffe6c1712.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>NOTE: I answer ‘yes’ to MOST of the questions when asked but there are 3 exceptions, as listed below.</p>
</blockquote>
<ul>
<li>I answer ‘no’ to ‘Disable high availability?’:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749451321563/2c94ed4f-12a3-4e6c-8da4-f39afc64b811.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>NOTE: High availability will be used when other nodes are created.</p>
</blockquote>
<ul>
<li>I answer ‘no’ to ‘Update Proxmox VE now?‘:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749451488029/9440bbae-c167-4acb-9e48-6d8cd2e6c6fc.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>NOTE: I will update PVE manually later in this post.</p>
</blockquote>
<ul>
<li>I answer ‘no’ to ‘Reboot Proxmox VE now?‘:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749451626813/9ee7cf61-d8db-4a69-9497-d6b48177c869.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>NOTE: I will reboot once I finish updating the remote system and upgrading PVE.</p>
</blockquote>
<ul>
<li>Once the script has finished, I update the system:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762492721385/7aa75a5a-9840-4c12-b3f8-110c06912454.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>NOTE: The ‘sudo’ command is not required as the root account has full privileges.</p>
</blockquote>
<ul>
<li>Once the updates have been downloaded, I run the ‘pveupgrade‘ command to update the system and the PVE installation:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762492827888/1c0d9bdb-0f64-4158-a301-656fd97aab91.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Due to the installation of a kernel update, I need to reboot PVE:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762493073753/3d8b8b81-c5d3-459d-ae2d-e82f094d047f.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-preparing-pve">Preparing PVE.</h2>
<p>Before I can create any containers or virtual machines, I need to prepare PVE and the assets it will use.</p>
<hr />
<h3 id="heading-downloading-an-os-to-pve">Downloading an OS to PVE.</h3>
<ul>
<li><p>I use a browser to login to PVE.</p>
</li>
<li><p>On the left of the screen, under Server View, I go to <code>Datacenter &gt; nuclab60 &gt; local (nuclab60)</code>.</p>
</li>
<li><p>In the 2nd pane, I click <code>ISO Images</code>.</p>
</li>
<li><p>In the 3rd pane, I click the grey <code>Download from URL</code> button:</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762550929041/a1a1eafc-cd6b-4b82-9525-387d8485be5b.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>In the pop-up modal, I add <a target="_blank" href="https://releases.ubuntu.com/24.04.3/ubuntu-24.04.3-live-server-amd64.iso"><code>https://releases.ubuntu.com/24.04.3/ubuntu-24.04.3-live-server-amd64.iso</code></a> to the <code>URL:</code> field so that PVE can download the ISO for Ubuntu Server 24.04.3 LTS.</p>
</li>
<li><p>I click the blue <code>Query URL</code> button to check the link:</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762551133028/79464be1-463d-459e-af6b-63d436e0bfb4.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I click the blue <code>Download</code> button to start the download:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762551187522/dec4a9ea-6852-4a6c-8def-09436d5bd24b.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I close the modal once I receive the ‘TASK OK’ message:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762551720898/1cc54434-e656-4592-9b05-37ea637754a7.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-preparing-the-disks"><strong>Preparing the Disks</strong>.</h3>
<blockquote>
<p>NOTE: The following is adapted from the instructions provided by the <a target="_blank" href="https://forum.proxmox.com/threads/proxmox-beginner-tutorial-how-to-set-up-your-first-virtual-machine-on-a-secondary-hard-disk.59559/">PVE team</a>.</p>
</blockquote>
<ul>
<li><p>I use a browser to login to PVE.</p>
</li>
<li><p>On the left of the screen, under Server View, I go to <code>Datacenter &gt; pve</code>.</p>
</li>
<li><p>In the 2nd pane, I click <code>Disks</code>.</p>
</li>
<li><p>In the 3rd pane, I select the <code>/dev/sda</code> drive (that currently has 2 partitions).</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762554101378/327bc127-931d-4abc-87c3-c2a10bf15eec.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>I click the grey <code>Wipe Disk</code> button.</p>
</li>
<li><p>In the Confirm modal, I click the blue <code>Yes</code> button.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762555505727/d3ac8cd5-f7ce-42a2-8e4c-629a8211533a.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>I repeat the process for the <code>/dev/sdb</code> disk.</p>
</li>
<li><p>Back in the 2nd pane, I click <code>Disks &gt; ZFS</code>.</p>
</li>
<li><p>In the 3rd pane, I click the grey <code>Create: ZFS</code> button:</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762425971626/4d7d68d7-0bd4-42fa-88b6-ed5d94555007.png" alt class="image--center mx-auto" /></p>
<ul>
<li>In the <code>Create: ZFS</code> modal, I add the following details and then click the blue ‘Create‘ button:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762426262241/b235add3-ef6c-4ec8-9c1b-03cab15e839d.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>NOTE: The <code>/dev/sdb</code> disk is an external HDD that uses the USB 3.0 interface.</p>
</blockquote>
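<p>After creating the pool, a quick sanity check from the PVE shell. I assume the pool name matches the ‘zfs-disk’ storage selected later in this post; yours may differ:</p>
<pre><code class="lang-bash">zpool status zfs-disk   # the pool should be ONLINE with the expected device
zpool list              # shows pool capacity and usage
</code></pre>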
<h3 id="heading-installing-a-container-template">Installing a Container Template.</h3>
<ul>
<li><p>I use a browser to login to PVE.</p>
</li>
<li><p>On the left of the screen, under Server View, I go to <code>Datacenter &gt; pve &gt; local (pve)</code>.</p>
</li>
<li><p>In the 2nd pane, I click <code>CT Templates</code>.</p>
</li>
<li><p>In the 3rd pane, I click the grey <code>Templates</code> button.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762555737511/be9cd9f8-9d45-4ed9-9e4c-083cfabf606a.png" alt class="image--center mx-auto" /></p>
<ul>
<li>In the Templates modal, I select the <code>ubuntu-24.04-standard</code> template and click the blue <code>Download</code> button:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762555933959/94fff826-af15-4025-9964-1183783b15cf.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>Now that all of the requirements are in place, I can take the next step by clicking the blue <code>Create CT</code> button (top-right of the screen), following the resulting prompts, and creating a container.</p>
</blockquote>
<hr />
<h2 id="heading-creating-a-new-container">Creating a New Container</h2>
<p>This container is built to be cloned. As such, there is a lot of effort that goes into building it.</p>
<hr />
<h3 id="heading-create-ct">“Create CT.”</h3>
<ul>
<li><p>I use a browser to login to PVE.</p>
</li>
<li><p>At the top-right of the screen, I click the blue ‘Create CT‘ button:</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762558404220/01f4f41d-36a6-44af-9196-fdc66cdd3a39.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I add details to the ‘General’ tab, then I click the blue ‘Next’ button:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762558556205/07dc05cd-d20f-42b1-aa86-997f1426381d.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I select the ‘Template:’, then I click the blue ‘Next’ button:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762558699395/68981f3e-3df6-4c5e-b79f-30cd11e1cddd.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I select the ‘Storage:‘ (zfs-disk), set the size (56GB), then I click the blue ‘Next’ button:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762559126740/052da609-a2e3-4c2f-a6cc-31caf1360e16.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I leave the ‘Cores:‘ set to 1, then I click the blue ‘Next‘ button:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762559170147/84c51057-a488-4fa4-a955-abb032895592.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I set the ‘Memory (MiB):’ (12288), the ‘Swap (MiB):’ (4096), then I click the blue ‘Next‘ button:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762559235214/ddd9768b-fc2f-4d35-a6b4-27b05c51acea.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I set the ‘IPv4/CIDR:’, ‘Gateway (IPv4):’, then I click the blue ‘Next’ button:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762559308645/3ba179a5-aef7-4cb9-b173-d4fc253d6ea3.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I leave the DNS settings blank, then I click the blue ‘Next’ button:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762559352433/24f69e7d-8a78-4817-a467-9ae65baebded.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I check my settings, then I click the blue ‘Finish’ button:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762559398955/a72906ce-494a-448f-a582-82ec3865bbb2.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-start-at-boot">“Start at boot.”</h3>
<ul>
<li><p>I use a browser to login to PVE.</p>
</li>
<li><p>On the left of the screen, under Server View, I go to <code>Datacenter &gt; nuclab60 &gt; 101 (nuclab61)</code>.</p>
</li>
<li><p>In the 2nd pane, I click <code>Options</code>.</p>
</li>
<li><p>In the 3rd pane, I select “Start at boot“ from the list and click the gray <code>Edit</code> button.</p>
</li>
<li><p>In the “Edit: Start at boot“ modal, I tick the “Start at boot” option followed by the blue <code>OK</code> button:</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763546634367/623125be-b6e2-4f48-9bc9-fb7288798f2c.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-creating-a-user-account-for-the-container">Creating a User Account for the Container.</h3>
<ul>
<li><p>I use a browser to login to PVE.</p>
</li>
<li><p>On the left of the screen, under Server View, I go to <code>Datacenter &gt; nuclab60 &gt; 101 (nuclab61</code>):</p>
</li>
<li><p>I start the container.</p>
</li>
<li><p>In the 2nd pane, I click <code>Console</code>.</p>
</li>
<li><p>In the 3rd pane, I login to the container using root as the <code>nuclab61 login:</code> and the password I created while building the container.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763548852242/34220966-59d6-4c92-8df5-6a66279d3b49.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Once logged in, I create a new user account:</li>
</ul>
<pre><code class="lang-bash">adduser brian
</code></pre>
<ul>
<li>I add the new user to the 'sudo' group:</li>
</ul>
<pre><code class="lang-bash">usermod -aG sudo brian
</code></pre>
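<p>To confirm the account and its group membership (the username is the one created above):</p>
<pre><code class="lang-bash">id brian   # the output should list 'sudo' among the groups
</code></pre>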
<ul>
<li>I log out of the root account for the container:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">logout</span>
</code></pre>
<ul>
<li>Towards the top-right of the console window, I open the drop down menu of the <code>Shutdown</code> option, by clicking the down arrow (⌄), and selecting <code>Reboot</code>:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763549124333/0e716d01-684e-4b1e-bb97-cf9f6373eb7a.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>Once the reboot is complete, I switch to a terminal on my local PC.</p>
</blockquote>
<hr />
<h2 id="heading-creating-an-rsa-key-pair-on-the-local-pc">Creating an RSA Key Pair on the Local PC.</h2>
<ul>
<li>From a terminal (<code>CTRL</code> + <code>ALT</code> + <code>T</code>) on my local PC, I start the ssh-agent:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">eval</span> <span class="hljs-string">"<span class="hljs-subst">$(ssh-agent -s)</span>"</span>
</code></pre>
<ul>
<li>I generate a pair of RSA keys called "/home/brian/.ssh/key-name" (where I replace "key-name" with the name of the remote server):</li>
</ul>
<pre><code class="lang-bash">ssh-keygen -b 4096
</code></pre>
<blockquote>
<p>NOTE: It is my convention to name RSA keys after the remote server on which they will be used.</p>
</blockquote>
<ul>
<li>I add the SSH key to my workstation account (where I replace "key-name" with the <em>actual</em> name of the ssh key):</li>
</ul>
<pre><code class="lang-bash">ssh-add /home/brian/.ssh/nuclab61
</code></pre>
<hr />
<h3 id="heading-uploading-the-public-key-to-the-remote-container">Uploading the Public Key to the Remote Container.</h3>
<ul>
<li>From the <code>workstation</code> terminal (<code>CTRL</code> + <code>ALT</code> + <code>T</code>), I use "ssh-copy-id" to upload the locally-generated public key to the remote container (where I replace "container-name" with the <em>actual</em> name of the container):</li>
</ul>
<pre><code class="lang-bash">ssh-copy-id -i /home/brian/.ssh/nuclab61.pub brian@192.168.0.61
</code></pre>
<hr />
<h3 id="heading-ssh-folder-and-file-permissions">SSH Folder and File Permissions.</h3>
<ul>
<li>Change the permission for the .ssh folder:</li>
</ul>
<pre><code class="lang-bash">chmod 0700 /home/brian/.ssh
</code></pre>
<ul>
<li>Change the permission for the private key:</li>
</ul>
<pre><code class="lang-bash">chmod 0600 /home/brian/.ssh/nuclab61
</code></pre>
<ul>
<li>Change the permission for the public key:</li>
</ul>
<pre><code class="lang-bash">chmod 0644 /home/brian/.ssh/nuclab61.pub
</code></pre>
<hr />
<h2 id="heading-special-notes">SPECIAL NOTES</h2>
<p>The "Permission denied (publickey)" error indicates that your SSH connection was rejected because the server could not authenticate your public key. To resolve this, ensure your public key is correctly added to your account on the server and that your private key has the correct file permissions.</p>
<h3 id="heading-understanding-the-permission-denied-publickey-error">Understanding the "Permission Denied (publickey)" Error</h3>
<p>The "Permission denied (publickey)" error occurs when your SSH client cannot authenticate with the server using the provided public key. This can happen for several reasons.</p>
<h3 id="heading-common-causes-and-solutions">Common Causes and Solutions</h3>
<h3 id="heading-1-incorrect-ssh-key-configuration">1. Incorrect SSH Key Configuration</h3>
<ul>
<li><p><strong>Public Key Not Added</strong>: Ensure your public key is added to the server's <code>~/.ssh/authorized_keys</code> file.</p>
</li>
<li><p><strong>Key Format</strong>: Verify that the key is in the correct format and not corrupted.</p>
</li>
</ul>
<h3 id="heading-2-ssh-key-permissions">2. SSH Key Permissions</h3>
<ul>
<li><p><strong>File Permissions</strong>: The permissions for your SSH keys must be set correctly:</p>
<ul>
<li><p>Private key: <code>chmod 600 ~/.ssh/id_rsa</code></p>
</li>
<li><p>Public key: <code>chmod 644 ~/.ssh/id_rsa.pub</code></p>
</li>
<li><p><code>.ssh</code> directory: <code>chmod 700 ~/.ssh</code></p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-3-ssh-agent-issues">3. SSH Agent Issues</h3>
<ul>
<li><p><strong>SSH Agent Not Running</strong>: Start the SSH agent with <code>eval "$(ssh-agent -s)"</code>.</p>
</li>
<li><p><strong>Key Not Loaded</strong>: Add your private key to the agent using <code>ssh-add ~/.ssh/id_rsa</code>.</p>
</li>
</ul>
<h3 id="heading-4-connection-user">4. Connection User</h3>
<ul>
<li><strong>Correct User</strong>: Always connect using the "git" user for GitHub or the appropriate user for your server. For example, use <code>ssh -T git@github.com</code>.</li>
</ul>
<h3 id="heading-additional-troubleshooting-steps">Additional Troubleshooting Steps</h3>
<ul>
<li><p><strong>Verbose Mode</strong>: Use <code>ssh -v user@host</code> to get detailed output about the connection process. This can help identify where the failure occurs.</p>
</li>
<li><p><strong>Firewall or Network Issues</strong>: Ensure that your network allows SSH connections and that the server's firewall is not blocking your access.</p>
</li>
</ul>
<p>By following these steps, you should be able to resolve the "Permission denied (publickey)" error and successfully connect to your server.</p>
<hr />
<h3 id="heading-logging-in-to-the-remote-container">Logging In to the Remote Container.</h3>
<ul>
<li>From the terminal (CTRL + ALT + T), I login to the account of the remote server:</li>
</ul>
<pre><code class="lang-bash">ssh -i /home/brian/.ssh/nuclab61 <span class="hljs-string">'brian@192.168.0.61'</span>
</code></pre>
<ul>
<li>For ‘Too many authentication failures‘, use the following:</li>
</ul>
<pre><code class="lang-bash">ssh -o IdentitiesOnly=yes brian@192.168.0.61
</code></pre>
<hr />
<h2 id="heading-preparing-the-container">Preparing the Container.</h2>
<p>The next step is to prepare the container for cloning.</p>
<hr />
<h3 id="heading-updating-the-container">Updating the Container.</h3>
<ul>
<li>I update Ubuntu:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<hr />
<h3 id="heading-installing-the-unattended-upgrades-utility">Installing the Unattended Upgrades Utility.</h3>
<ul>
<li>I install the <code>unattended-upgrades</code> package:</li>
</ul>
<pre><code class="lang-bash">sudo apt install unattended-upgrades
</code></pre>
<ul>
<li>I manually trigger an Unattended Upgrade:</li>
</ul>
<pre><code class="lang-bash">sudo unattended-upgrade
</code></pre>
<blockquote>
<p>NOTE: -d is the switch for running this command in debug mode.</p>
</blockquote>
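<p>On Ubuntu, the periodic runs are driven by APT's configuration; a common way to enable them, in case they are not already on (an extra step beyond the ones above), is:</p>
<pre><code class="lang-bash">sudo dpkg-reconfigure -plow unattended-upgrades   # writes /etc/apt/apt.conf.d/20auto-upgrades
</code></pre>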
<ul>
<li>I check the Unattended Upgrades log to ensure everything worked as expected:</li>
</ul>
<pre><code class="lang-bash">sudo cat /var/<span class="hljs-built_in">log</span>/unattended-upgrades/unattended-upgrades.log
</code></pre>
<hr />
<h3 id="heading-hardening-the-container">Hardening the Container.</h3>
<ul>
<li>I open the "sshd_config" file:</li>
</ul>
<pre><code class="lang-bash">sudo nano /etc/ssh/sshd_config
</code></pre>
<ul>
<li>I paste (CTRL + SHIFT + V) the following at the bottom of the "sshd_config" page, save (CTRL + S) the changes, and exit (CTRL + X) the Nano text editor:</li>
</ul>
<pre><code class="lang-bash">PasswordAuthentication no
PermitRootLogin no
Protocol 2
</code></pre>
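<p>Before restarting the service, the edited file can be validated; this check is my addition, not part of the original steps:</p>
<pre><code class="lang-bash">sudo sshd -t   # exits silently when sshd_config is syntactically valid
</code></pre>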
<ul>
<li>I restart the "ssh" service:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl restart ssh.service
</code></pre>
<hr />
<h3 id="heading-enabling-and-setting-up-ufw-on-the-container">Enabling, and Setting Up, UFW on the Container.</h3>
<ul>
<li>I check the UFW status:</li>
</ul>
<pre><code class="lang-bash">sudo ufw status
</code></pre>
<ul>
<li>I enable the UFW:</li>
</ul>
<pre><code class="lang-bash">sudo ufw <span class="hljs-built_in">enable</span>
</code></pre>
<ul>
<li>I install a UFW rule:</li>
</ul>
<pre><code class="lang-bash">sudo ufw allow from 192.168.0.2
</code></pre>
<blockquote>
<p>NOTE: I specify the IP address of the PC from which I will connect using SSH.</p>
</blockquote>
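<p>A tighter variant of that rule limits the allowance to SSH only; this alternative is my suggestion rather than the rule used above:</p>
<pre><code class="lang-bash">sudo ufw allow from 192.168.0.2 to any port 22 proto tcp   # SSH only, from one host
</code></pre>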
<ul>
<li>I check the status of the UFW and list the rules by number:</li>
</ul>
<pre><code class="lang-bash">sudo ufw status numbered
</code></pre>
<blockquote>
<p>NOTE 1: UFW will, by default, block all incoming traffic, including SSH and HTTP.</p>
<p>NOTE 2: I will update the UFW rules as I deploy other services to the remote server.</p>
</blockquote>
<ul>
<li>I can delete a UFW rule by number if needed:</li>
</ul>
<pre><code class="lang-bash">sudo ufw delete 1
</code></pre>
<ul>
<li>I can also disable UFW if needed:</li>
</ul>
<pre><code class="lang-bash">sudo ufw <span class="hljs-built_in">disable</span>
</code></pre>
<hr />
<h3 id="heading-installing-and-setting-up-fail2ban-on-the-container">Installing, and Setting Up, Fail2Ban on the Container.</h3>
<ul>
<li>I install Fail2Ban:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y fail2ban
</code></pre>
<ul>
<li>I copy the <code>jail.conf</code> file as <code>jail.local</code>:</li>
</ul>
<pre><code class="lang-bash">sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
</code></pre>
<ul>
<li>I open the <code>jail.local</code> file in Nano:</li>
</ul>
<pre><code class="lang-bash">sudo nano /etc/fail2ban/jail.local
</code></pre>
<ul>
<li>I make the following changes to a few (SSH-centric) settings in the <code>jail.local</code> file, then I save (CTRL + S) those changes, and exit (CTRL + X) the Nano text editor:</li>
</ul>
<pre><code class="lang-bash">[DEFAULT]
⋮
bantime = 30m
ignoreip = 127.0.0.1/8 your_ip_address
⋮
[sshd]
enabled = <span class="hljs-literal">true</span>
port = ssh,22
</code></pre>
<ul>
<li>I restart Fail2Ban:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl restart fail2ban
</code></pre>
<ul>
<li>I check the Fail2Ban status and the list of active jails:</li>
</ul>
<pre><code class="lang-bash">sudo fail2ban-client status
</code></pre>
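<p>For per-jail detail, the same client can query the sshd jail directly:</p>
<pre><code class="lang-bash">sudo fail2ban-client status sshd   # shows failure counts and any banned IPs
</code></pre>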
<ul>
<li>I check the status of Fail2Ban:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl status fail2ban
</code></pre>
<ul>
<li>I enable Fail2Ban to auto-start on boot:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl <span class="hljs-built_in">enable</span> fail2ban
</code></pre>
<ul>
<li>I reboot the container:</li>
</ul>
<pre><code class="lang-bash">sudo reboot
</code></pre>
<hr />
<h2 id="heading-clone">“Clone.”</h2>
<p>A clone is a direct, functional copy of a container that includes all of the settings from the original. After making the clones, I will adjust the settings of each so they will function correctly.</p>
<hr />
<h3 id="heading-cloning-the-container">Cloning the Container.</h3>
<ul>
<li><p>I use a browser to login to PVE.</p>
</li>
<li><p>On the left of the screen, under Server View, I go to <code>Datacenter &gt; nuclab60</code>.</p>
</li>
<li><p>In the second pane, I select “Search”.</p>
</li>
<li><p>In the third pane, I right-click the ‘101 (nuclab61)’ container and, in the pop-up menu, I click the ‘Shutdown’ option if the container is running.</p>
</li>
<li><p>In the third pane, I right-click the ‘101 (nuclab61)’ container and, in the pop-up menu, I click the ‘Clone’ option:</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762559530710/07abcc6a-5f52-425a-8f88-c11f0d50b77f.png" alt class="image--center mx-auto" /></p>
<ul>
<li>In the ‘Clone CT 101 (nuclab61)’ modal, I enter the following details, then I click the blue ‘Clone’ button:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762559671586/b28d6ea9-46df-41f8-ad3f-c137b9f84724.png" alt class="image--center mx-auto" /></p>
<ul>
<li>After a moment, a clone of the original container appears under <code>Datacenter &gt; nuclab60</code>:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762559913550/f7a26f78-bf32-4197-bb75-7705b80f25a8.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I repeat this process two more times to meet my requirements.</li>
</ul>
<hr />
<h3 id="heading-network-settings-for-each-clone">Network Settings for Each Clone.</h3>
<ul>
<li><p>I use a browser to login to PVE.</p>
</li>
<li><p>On the left of the screen, under Server View, I go to <code>Datacenter &gt; nuclab60 &gt; 101 (nuclab61)</code>.</p>
</li>
<li><p>In the 2nd pane, I click <code>Network</code>.</p>
</li>
<li><p>In the 3rd pane, I select the Network Device and click the gray, <code>Edit</code> button:</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763326487766/76e9a27b-02fb-48ca-bca6-9d6095ca8a36.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>In the ‘Edit: Network Device’ modal, I:</p>
<ul>
<li><p>Change the ‘Name:’,</p>
</li>
<li><p>Ensure the ‘IPv4:’ radio button is set to ‘Static’,</p>
</li>
<li><p>Ensure the ‘IPv4/CIDR:’ setting is correct, and</p>
</li>
<li><p>Ensure the ‘Gateway (IPv4):’ setting is correct:</p>
</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763326666045/728c477e-d325-47ba-9024-3b0173e7ba42.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Once I confirm these settings, I click the blue <code>OK</code> button and repeat the process for the three remaining containers.</li>
</ul>
<hr />
<h2 id="heading-setting-up-the-local-terminal">Setting Up the Local Terminal.</h2>
<p>The following describes:</p>
<ul>
<li><p>Generating an RSA key pair locally for an SSH connection,</p>
</li>
<li><p>Pushing the public key to the container,</p>
</li>
<li><p>Logging in to the remote container,</p>
</li>
<li><p>Updating the OS that is running on the container,</p>
</li>
<li><p>Hardening the container by changing a few settings, and</p>
</li>
<li><p>Installing and enabling security utilities like UFW and Fail2Ban.</p>
</li>
</ul>
<blockquote>
<p>NOTE: These operations need to be performed for (and on) each container.</p>
</blockquote>
<hr />
<h2 id="heading-the-results">The Results.</h2>
<p>Installing PVE (Proxmox Virtual Environment) onto a spare PC results in a compact and efficient solution for creating, and managing, containers and virtual environments. The process involves preparing the installation hardware, creating a USB drive that is used as the installation media, and configuring the system to suit my network and storage needs. By following the steps above, I can set up a robust virtualization platform that supports both containers and virtual machines. This setup not only maximizes the capabilities of the spare PC but also offers flexibility and scalability for various computing tasks.</p>
<hr />
<h2 id="heading-in-conclusion">In Conclusion.</h2>
<p>In this guide, I created a USB installation thumb drive for PVE, installed PVE onto a spare PC, learned how to download an OS to PVE, installed CT templates, created a container, created a new account for that container, and cloned that container multiple times. By following these steps, I maximized the capabilities of the spare PC and now enjoy a robust virtualization platform that supports both containers and virtual machines. This setup offers flexibility and scalability for various computing tasks, making it perfect for tech enthusiasts and professionals alike.</p>
<p>Have you tried setting up PVE on a spare PC? What challenges did you face? How did you overcome those challenges? Let's discuss in the comments below!</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<h2 id="heading-hash-tags">Hash Tags.</h2>
<p>#ProxmoxVE #pve #IntelNUC #Virtualization #Homelab #Containers #VirtualMachines #Networking #ServerSetup #ServerCluster #Linux #Debian #Ubuntu #TechGuide</p>
]]></content:encoded></item><item><title><![CDATA[Installing the WinBoat AppImage.]]></title><description><![CDATA[TL;DR.
This post provides a step-by-step guide on installing the WinBoat AppImage on a Debian-based Linux system, such as Ubuntu. It covers prerequisites, system updates, and the installation steps, enabling me to run Windows applications seamlessly ...]]></description><link>https://solodev.app/installing-the-winboat-appimage</link><guid isPermaLink="true">https://solodev.app/installing-the-winboat-appimage</guid><category><![CDATA[WinBoat]]></category><category><![CDATA[WindowsOnLinux]]></category><category><![CDATA[SoftwareAccess]]></category><category><![CDATA[LinuxApps]]></category><category><![CDATA[Ubuntu]]></category><category><![CDATA[Linux]]></category><category><![CDATA[appimage]]></category><category><![CDATA[Seamless Integration]]></category><category><![CDATA[Tech Tutorial]]></category><category><![CDATA[Productivity]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Sat, 04 Oct 2025 09:00:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763599523003/a4129026-7391-4bcf-aef3-9b90121e6a8a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-tldr">TL;DR.</h2>
<p>This post provides a step-by-step guide on installing the WinBoat AppImage on a Debian-based Linux system, such as Ubuntu. It covers prerequisites, system updates, and the installation steps, enabling me to run Windows applications seamlessly on Linux for enhanced productivity and software access.</p>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><a target="_blank" href="https://www.winboat.app/"><strong><em>WinBoat.app</em></strong></a> <strong><em>↗.</em></strong></p>
</blockquote>
<hr />
<h2 id="heading-an-introduction">An Introduction.</h2>
<p>This post is a comprehensive walk-through for installing WinBoat AppImage on Ubuntu 24.04 LTS, enabling Windows applications to run in a Linux environment:</p>
<blockquote>
<p>The purpose of this post is to allow access to Win11 programs within Ubuntu.</p>
</blockquote>
<hr />
<h2 id="heading-the-big-picture">The Big Picture.</h2>
<p>WinBoat on Ubuntu bridges the gap between Linux and Windows environments, allowing me to leverage the strengths of both operating systems for enhanced productivity and software versatility.</p>
<hr />
<h2 id="heading-prerequisites">Prerequisites.</h2>
<ul>
<li>A Debian-based Linux distro (I use Ubuntu).</li>
</ul>
<hr />
<h2 id="heading-updating-my-base-system">Updating my Base System.</h2>
<ul>
<li>From the (base) terminal, I update my (base) system:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<blockquote>
<p>NOTE: The Ollama LLM manager is already installed on my (base) system.</p>
</blockquote>
<hr />
<h2 id="heading-what-is-winboat">What is WinBoat?</h2>
<p>WinBoat is an application that allows users to run Windows applications on Linux with seamless integration, making them feel like native apps. It uses a containerized approach to run Windows in a virtual machine, enabling access to a wide range of Windows software within a Linux environment.</p>
<hr />
<h2 id="heading-installing-winboat">Installing WinBoat.</h2>
<ul>
<li>From the terminal, I install the libfuse2 library:</li>
</ul>
<pre><code class="lang-bash">sudo apt install libfuse2
</code></pre>
<blockquote>
<p>NOTE: AppImages rely on FUSE (Filesystem in Userspace) to function properly.</p>
</blockquote>
<ul>
<li><p>From a browser, I download the AppImage file from the <a target="_blank" href="https://www.winboat.app/">WinBoat.app</a> website.</p>
</li>
<li><p>From the file manager, I move the WinBoat app to its own directory.</p>
</li>
<li><p>I copy the WinBoat logo to the WinBoat directory:</p>
</li>
</ul>
<blockquote>
<p>NOTE: I downloaded the WinBoat PNG logo from the Internet.</p>
</blockquote>
<ul>
<li>From the terminal, I make the AppImage an executable:</li>
</ul>
<pre><code class="lang-bash">chmod +x /media/brian/Downloads/Ubuntu/WinBoat/winboat-0.8.5-x86_64.AppImage
</code></pre>
<ul>
<li>I use the Nano text editor to create a desktop entry:</li>
</ul>
<pre><code class="lang-bash">nano ~/.<span class="hljs-built_in">local</span>/share/applications/winboat.desktop
</code></pre>
<ul>
<li>I paste (CTRL + SHIFT + V) the following into the desktop entry, save (CTRL + S) the changes, and exit (CTRL + X) the Nano text editor:</li>
</ul>
<pre><code class="lang-bash">[Desktop Entry]
Name=WinBoat
Exec=/media/brian/Downloads/Ubuntu/WinBoat/winboat-0.8.5-x86_64.AppImage --no-sandbox
Icon=/media/brian/Downloads/Ubuntu/WinBoat/winboat-logo.png
Type=Application
Categories=OS;Distro
</code></pre>
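<p>If the new entry does not appear in the apps menu right away, refreshing the desktop database usually helps; this step is my addition and is often unnecessary:</p>
<pre><code class="lang-bash">update-desktop-database ~/.local/share/applications
</code></pre>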
<ul>
<li>From the apps menu, I pin the WinBoat app to the Dash.</li>
</ul>
<hr />
<h2 id="heading-the-results">The Results.</h2>
<p>Installing the WinBoat AppImage on a Debian-based Linux system, such as Ubuntu, allows me to seamlessly run Windows applications as if they were native to Linux. By following the outlined steps, including updating the system, installing necessary libraries, and setting up the AppImage, I can enjoy the benefits of both operating systems. This integration not only enhances productivity but also expands the range of available software, making it a valuable tool for both Linux and Windows applications.</p>
<hr />
<h2 id="heading-in-conclusion">In Conclusion.</h2>
<p>I learned how to seamlessly install the WinBoat AppImage on a Debian-based Linux system like Ubuntu. This post covered prerequisites, system updates, and step-by-step installation, enabling me to run Windows applications on Linux as if they were native, for enhanced productivity and software access.</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<h2 id="heading-hash-tags">Hash Tags.</h2>
<p>#WinBoat #Linux #Ubuntu #AppImage #WindowsOnLinux #SeamlessIntegration #TechTutorial #Productivity #SoftwareAccess #LinuxApps</p>
]]></content:encoded></item><item><title><![CDATA[How, and Why, I use System Images.]]></title><description><![CDATA[TL;DR.
Using system images allows for efficient computer management by separating system files from personal data, ensuring easy recovery and enhancing productivity. Regularly updating and creating production images facilitates quick restoration and ...]]></description><link>https://solodev.app/how-and-why-i-use-system-images</link><guid isPermaLink="true">https://solodev.app/how-and-why-i-use-system-images</guid><category><![CDATA[SystemImages]]></category><category><![CDATA[ComputerManagement]]></category><category><![CDATA[CloneZilla]]></category><category><![CDATA[TechAdaptability]]></category><category><![CDATA[SystemBackup]]></category><category><![CDATA[EfficientComputing]]></category><category><![CDATA[data recovery]]></category><category><![CDATA[Productivity]]></category><category><![CDATA[techtips]]></category><category><![CDATA[Data Backup]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Wed, 10 Sep 2025 23:06:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757544734031/b6a97fcc-1c6b-4c9a-89c5-548e7adf8423.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-tldr">TL;DR.</h2>
<p>Using system images allows for efficient computer management by separating system files from personal data, ensuring easy recovery and enhancing productivity. Regularly updating and creating production images facilitates quick restoration and experimentation with new technologies, saving time and providing peace of mind.</p>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><strong><em>None ↗.</em></strong></p>
</blockquote>
<hr />
<h2 id="heading-an-introduction">An Introduction.</h2>
<p>Using system images allows for efficient computer management by separating system files from personal data:</p>
<blockquote>
<p>The purpose of this post is to justify the creation, and use, of operating system images.</p>
</blockquote>
<hr />
<h2 id="heading-the-big-picture">The Big Picture.</h2>
<p>Creating, and using, operating system images offers a range of features and benefits that enhance computer management and productivity:</p>
<ol>
<li><p><strong>Efficient System Management</strong>: System images allow me to capture the entire state of an operating system, including installed applications, system settings, and configurations. This makes it easy to manage and maintain systems, especially in environments with multiple computers.</p>
</li>
<li><p><strong>Quick Recovery and Restoration</strong>: In the event of system failure, corruption, or malware attack, a system image can be used to quickly restore my system to a previous, stable state. This minimises downtime and ensures business continuity.</p>
</li>
<li><p><strong>Separation of System and Data</strong>: By keeping system files separate from personal data, I can ensure that my important information is safe and easily recoverable. This separation also simplifies the process of updating or upgrading the operating system without affecting my personal files.</p>
</li>
<li><p><strong>Experimentation and Testing</strong>: System images provide a safe environment for testing new software, updates, or configurations. I can experiment with new technologies without the risk of permanently altering my primary system setup.</p>
</li>
<li><p><strong>Time-Saving</strong>: Regularly updating and creating system images reduces the time required for system setup and configuration. When a system needs to be reinstalled, the image can be deployed quickly, saving time compared to manual installation and configuration.</p>
</li>
<li><p><strong>Consistency Across Systems</strong>: In organisational settings, system images ensure consistency across multiple computers. This is particularly useful for IT departments that need to deploy standardised environments across a network of machines.</p>
</li>
<li><p><strong>Peace of Mind</strong>: Knowing that a reliable backup of the system exists provides peace of mind. I can be confident that I can recover my system to a working state with minimal effort.</p>
</li>
<li><p><strong>Cost-Effective</strong>: By reducing the need for technical support and minimising downtime, system images can lead to cost savings for both individuals and organisations.</p>
</li>
</ol>
<p>Overall, the use of operating system images is a practical approach to managing my computer systems, enhancing my productivity, and ensuring my data is kept securely separate from my operating system.</p>
<hr />
<h2 id="heading-prerequisites">Prerequisites.</h2>
<ul>
<li>A Debian-based Linux distro (I use Ubuntu).</li>
</ul>
<hr />
<h2 id="heading-updating-my-base-system">Updating my Base System.</h2>
<ul>
<li>From the (base) terminal, I update my (base) system:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<blockquote>
<p>NOTE: The Ollama LLM manager is already installed on my (base) system.</p>
</blockquote>
<hr />
<h2 id="heading-what-is-clonezilla-live">What is CloneZilla Live?</h2>
<p><a target="_blank" href="https://clonezilla.org/liveusb.php">Clonezilla Live</a> is a small, bootable GNU/Linux distribution that can be used to image, or “clone”, individual computers using an (obsolete) CD/DVD or USB flash drive.</p>
<hr />
<h2 id="heading-why-i-can-re-image-my-system">Why I Can Re-Image My System.</h2>
<p>I have always separated my system from my data. Over the years, I have saved my data to:</p>
<ul>
<li><p>5¼” floppy diskettes,</p>
</li>
<li><p>3½” floppy diskettes,</p>
</li>
<li><p>External serial Zip Drives,</p>
</li>
<li><p>External USB SATA HDDs,</p>
</li>
<li><p>External USB Enclosures with SATA SSDs, and</p>
</li>
<li><p>External USB Enclosures with M.2 NVMe Drives.</p>
</li>
</ul>
<blockquote>
<p>NOTE: I was very happy to get my purplish-blue 750MB Zip drive from Iomega. And then I found out they also had the 1GB Jaz drive. <em>Oh, well.</em> I used the Zip drive for years, even when I started using external USB drives. It finally died in the early 2010s.</p>
</blockquote>
<p>At the moment, I use a 4-bay NAS with RAID-5 redundancy, plus an external 5TB USB HDD that is used for backups.</p>
<p>Saving my data to the NAS means I can restore a previously saved system image to my PC. For decades, I have very rarely worried about losing anything important.</p>
<p>Last year, I had an HDD fail on me (and I still have a sour taste about the incident). I checked the drive and, luckily for me, the data could be retrieved. So I set that drive aside (fully intending to recover the data), used another HDD, and started saving up for a 4-bay NAS and 4 × 6TB HDDs. Now that I have a NAS, I’m starting to think about getting another one for replication. It couldn’t hurt.</p>
<p>Also, I never recovered the data from the failed HDD. I keep prioritising work over recovery. The busted drive has been sitting on my PC for over a year, quietly gathering dust, and now I can barely remember what it contained.</p>
<blockquote>
<p>NOTE: I have now added the failed drive to a pile of other HDDs that are scheduled for physical destruction… some day.</p>
</blockquote>
<hr />
<h2 id="heading-updating-my-fresh-system-image">Updating My Fresh System Image.</h2>
<p>Every few months, I will:</p>
<ul>
<li><p>Install my fresh system image,</p>
</li>
<li><p>Apply all of the updates, and</p>
</li>
<li><p>Replace the old system image.</p>
</li>
</ul>
<p>This update simply reduces the system update time whenever I use the system image.</p>
<hr />
<h2 id="heading-fresh-image-vs-production-image">Fresh Image vs. Production Image.</h2>
<p>Often, I will load my fresh system image, add a few apps or utilities, and save it as a production image.</p>
<p>The purpose of production images is to have specific apps and utilities ready to use as soon as I restore a production image. I will even create a production image that contains <em>everything.</em></p>
<p>Sometimes, I will create an image of my current setup, switch to a different image, do what I needed to achieve, and then restore my current setup.</p>
<p>The reason for having multiple images is education. Technology moves fast, especially AI technology, so using an image where I can run experiments allows me to explore how new tech can be integrated into my existing workflow.</p>
<hr />
<h2 id="heading-replacing-my-pc">Replacing My PC.</h2>
<p>My PC died in 2019. I had to save up for a replacement and I got a new PC with an AMD Ryzen 5 2600 CPU. It has 6 cores and supports 12 threads. Over the years, I have:</p>
<ul>
<li><p>Upgraded the RAM to its maximum of 64GB,</p>
</li>
<li><p>Installed a 1TB M.2 NVMe drive for Ubuntu,</p>
</li>
<li><p>Installed a 512GB SATA SSD for Windows,</p>
</li>
<li><p>Installed a 256GB SATA SSD for storage, and</p>
</li>
<li><p>Installed a 12GB VRAM Nvidia RTX-3060 graphics card.</p>
</li>
</ul>
<p>Switching to a different PC made no difference to my data. Everything important was saved to an external HDD (until it died 5 years later).</p>
<p>Even with my old - and long-expired - PC, I was using system images. I even used a product similar to CloneZilla Live when Windows was still my daily driver and primary system (before switching to Linux distros in 2015). Creating, and using, system images just feels like the right process for me.</p>
<hr />
<h2 id="heading-the-results">The Results.</h2>
<p>Using system images is a practical and efficient way to manage and maintain my computer systems. By separating system files from personal data, I can ensure that my important information is always safe and easily recoverable. Regularly creating production images allows for quick restoration and experimentation with new technologies. This approach not only saves time but also provides peace of mind, knowing that my system can be restored to a working state with minimal effort. Embracing system images as part of my workflow significantly enhances my productivity and adaptability in the ever-evolving tech landscape.</p>
<hr />
<h2 id="heading-in-conclusion">In Conclusion.</h2>
<p>Discover the benefits of using system images for efficient computer management. Learn how separating system files from personal data ensures easy recovery and enhances productivity. Explore practical tips for creating and updating system images to stay adaptable in the fast-paced tech world.</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<h2 id="heading-hash-tags">Hash Tags.</h2>
<p>#SystemImages #ComputerManagement #DataRecovery #Productivity #TechTips #CloneZilla #DataBackup #TechAdaptability #SystemBackup #EfficientComputing</p>
]]></content:encoded></item><item><title><![CDATA[A Look into the BMAD Method.]]></title><description><![CDATA[TL;DR.
This post provides an overview of how the BMAD Method works, identifies key markdown files and their purpose, explains how these files can be changed to meet my requirements, details the impact of making specific changes to these files, and wa...]]></description><link>https://solodev.app/a-look-into-the-bmad-method</link><guid isPermaLink="true">https://solodev.app/a-look-into-the-bmad-method</guid><category><![CDATA[BMADMethod]]></category><category><![CDATA[AIDrivenDevelopment]]></category><category><![CDATA[CustomizableAI]]></category><category><![CDATA[AIandAgile]]></category><category><![CDATA[DevelopmentEfficiency]]></category><category><![CDATA[agile development]]></category><category><![CDATA[software development]]></category><category><![CDATA[ai integration]]></category><category><![CDATA[Tech Innovation,]]></category><category><![CDATA[development workflow]]></category><category><![CDATA[AI Framework]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Agile Methodologies]]></category><category><![CDATA[ #AIInSoftware]]></category><category><![CDATA[ #TechBestPractices]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Wed, 30 Jul 2025 11:20:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753865265397/362312fb-8b6f-4a9d-adac-f37664c52de1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-tldr">TL;DR.</h2>
<p>This post provides an overview of how the BMAD Method works, identifies key markdown files and their purpose, explains how these files can be changed to meet my requirements, details the impact of making specific changes to these files, and walks through installing, and systematically applying, the BMAD Method to produce a comprehensive suite of documents (from high-level requirements to executable code and user-facing guides, all driven by intelligent AI agents).</p>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><a target="_blank" href="https://github.com/bmadcode/BMAD-METHOD"><strong><em>The BMAD GitHub Repository</em></strong></a> <strong><em>↗.</em></strong></p>
</blockquote>
<hr />
<h2 id="heading-an-introduction">An Introduction.</h2>
<p>Using AI for software development is a constantly evolving process. The BMAD method is yet another approach to keeping AI models focused on their programming tasks:</p>
<blockquote>
<p>The purpose of this post is to introduce the BMAD Method for AI-based software development.</p>
</blockquote>
<hr />
<h2 id="heading-the-big-picture">The Big Picture.</h2>
<p>The Breakthrough Method of Agile AI-Driven Development (BMAD Method) is a powerful natural language framework designed to enhance software development and other domains through specialized AI agents. This guide provides an overview of how the BMAD Method works, its key components, and how I can customize it to meet my specific requirements.</p>
<hr />
<h2 id="heading-prerequisites">Prerequisites.</h2>
<ul>
<li>A Debian-based Linux distro (I use Ubuntu).</li>
</ul>
<hr />
<h2 id="heading-updating-my-base-system">Updating my Base System.</h2>
<ul>
<li>From the (base) terminal, I update my (base) system:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<blockquote>
<p>NOTE: The Ollama LLM manager is already installed on my (base) system.</p>
</blockquote>
<hr />
<h2 id="heading-what-is-the-bmad-method">What is the BMAD Method?</h2>
<p>The BMAD Method, or Breakthrough Method for Agile AI-Driven Development, is a framework that integrates AI with agile methodologies to streamline software development. It utilizes specialized AI agents to manage various roles in the development process, enhancing efficiency and organization.</p>
<hr />
<h2 id="heading-1-overview-of-the-bmad-method">1. Overview of the BMAD Method.</h2>
<p>The BMAD Method addresses common challenges in AI-assisted development, such as planning inconsistency and context loss, through two core innovations:</p>
<h3 id="heading-11-agentic-planning">1.1. Agentic Planning.</h3>
<p>In this initial phase, dedicated AI agents like the Analyst, Product Manager (PM), and Architect collaborate to create highly detailed and consistent Product Requirement Documents (PRDs) and Architecture documents. This planning can optionally be performed using web-based agents for cost efficiency, leveraging powerful models and human-in-the-loop refinement to produce comprehensive specifications.</p>
<h3 id="heading-12-context-engineered-development">1.2. Context-Engineered Development.</h3>
<p>Following the planning phase, a Scrum Master (SM) agent takes the detailed plans and transforms them into hyper-detailed development stories. These story files are crucial as they embed the full context, implementation details, and architectural guidance directly within them. The Development (Dev) agent then uses these self-contained story files to implement the actual code, ensuring complete understanding of what, how, and why to build.</p>
<p>This two-phase approach ensures that development proceeds with a clear, consistent vision, minimizing misinterpretations and maximizing efficiency.</p>
<hr />
<h2 id="heading-2-the-bmad-workflow">2. The BMAD Workflow.</h2>
<p>The BMAD Method follows a structured workflow, typically starting with planning and moving into a continuous development cycle.</p>
<h3 id="heading-21-the-planning-workflow-web-ui-or-powerful-ide-agents">2.1. The Planning Workflow (Web UI or Powerful IDE Agents).</h3>
<p>The planning phase focuses on defining the project scope, requirements, and technical architecture.</p>
<ol>
<li><p><strong>Project Idea &amp; Research:</strong> Starts with a project idea, optionally involving an Analyst for brainstorming, market research, and competitor analysis to create a Project Brief.</p>
</li>
<li><p><strong>PRD Creation:</strong> A PM agent creates the PRD (Product Requirement Document) from the Project Brief, detailing Functional Requirements (FRs), Non-Functional Requirements (NFRs), Epics, and Stories.</p>
</li>
<li><p><strong>UX Specification (Optional):</strong> If User Experience (UX) is required, a UX Expert agent creates a Front End Specification and can generate UI prompts.</p>
</li>
<li><p><strong>Architecture Design:</strong> An Architect agent designs the system architecture based on the PRD and optional UX specifications.</p>
</li>
<li><p><strong>Document Alignment:</strong> A Product Owner (PO) agent runs a Master Checklist to ensure all documents (PRD, Architecture, UX Spec) are aligned. If not, the PO updates Epics and Stories, and the PRD/Architecture are revised.</p>
</li>
<li><p><strong>Transition to IDE:</strong> Once planning is complete and documents are aligned, the process transitions from web UI (if used) to the IDE. The PO shards the PRD and Architecture documents, preparing them for the development cycle.</p>
</li>
</ol>
<h3 id="heading-22-the-core-development-cycle-ide">2.2. The Core Development Cycle (IDE).</h3>
<p>This phase focuses on implementing the planned features and ensuring quality.</p>
<ol>
<li><p><strong>Story Drafting:</strong> The SM agent reviews previous development/QA notes and drafts the next story from the sharded Epic and Architecture documents.</p>
</li>
<li><p><strong>Story Review (Optional):</strong> A QA agent can optionally review the story draft against existing artifacts.</p>
</li>
<li><p><strong>User Approval:</strong> The user approves the story. If changes are needed, the SM revises the story.</p>
</li>
<li><p><strong>Development &amp; Testing:</strong> The Dev agent executes tasks sequentially, implements code and tests, and runs all validations.</p>
</li>
<li><p><strong>Ready for Review:</strong> The Dev agent marks the story as "Ready for Review" and adds notes.</p>
</li>
<li><p><strong>User Verification &amp; QA Review:</strong> The user verifies the work. They can approve without QA (with a critical reminder to verify all regression tests and linting) or request a QA review.</p>
</li>
<li><p><strong>QA Review &amp; Refactoring:</strong> If requested, the QA agent performs a senior developer review, refactors code, adds tests, and documents notes.</p>
</li>
<li><p><strong>QA Decision:</strong> QA decides if more Dev work is needed or if the story is approved.</p>
</li>
<li><p><strong>Commit Changes:</strong> Crucially, all changes are committed before proceeding.</p>
</li>
<li><p><strong>Story Completion:</strong> The story is marked as "Done," and the cycle repeats for the next story.</p>
</li>
</ol>
<hr />
<h2 id="heading-3-key-components-and-their-customization">3. Key Components and Their Customization.</h2>
<p>The BMAD Method's flexibility comes from its natural language-based components, primarily Markdown and YAML files, which define agent behaviors, workflows, and content.</p>
<h3 id="heading-31-agent-definition-files-bmad-coreagentsmd">3.1. Agent Definition Files (<code>bmad-core/agents/*.md</code>).</h3>
<ul>
<li><p><strong>Purpose:</strong> Each Markdown file in <code>bmad-core/agents/</code> defines a specific AI agent's persona, role, style, identity, focus, and core principles. They also list the commands the agent can execute and its dependencies (tasks, templates, checklists, data). The <code>customization</code> field allows for specific overrides that take precedence over other instructions.</p>
</li>
<li><p><strong>How to Change:</strong></p>
<ul>
<li><p><strong>Refine Persona:</strong> Modify the <code>persona</code> sections (role, style, identity, focus) to fine-tune how an agent "thinks" and communicates. For example, making a "Dev" agent's style "extremely concise" will result in shorter, more direct responses.</p>
</li>
<li><p><strong>Adjust Core Principles:</strong> Add, remove, or modify bullet points under <code>core_principles</code> to instill specific guidelines or values. For instance, adding "Prioritize secure coding practices" to a <code>dev.md</code> agent's principles would make it focus more on security during implementation.</p>
</li>
<li><p><strong>Update Commands &amp; Dependencies:</strong> Add new commands or modify existing ones, ensuring they map to relevant tasks or templates. Update the <code>dependencies</code> section to include or exclude files the agent needs to access.</p>
</li>
<li><p><strong>Apply Customization Overrides:</strong> Use the <code>agent.customization</code> field for powerful, explicit overrides that ensure specific behaviors or rules are always followed, even if they conflict with other instructions.</p>
</li>
</ul>
</li>
<li><p><strong>Impact of Changes:</strong> Directly alters the agent's behavior, capabilities, and output. A modified agent will approach tasks differently, potentially leading to changes in the generated code, documentation, or planning artifacts.</p>
</li>
</ul>
<h3 id="heading-32-checklists-bmad-corechecklistsmd">3.2. Checklists (<code>bmad-core/checklists/*.md</code>).</h3>
<ul>
<li><p><strong>Purpose:</strong> These Markdown files provide structured lists of criteria or step-by-step procedures that agents (or users) must follow to ensure quality, completeness, or adherence to standards. They often include <code>[[LLM: INSTRUCTIONS]]</code> for the agent on how to process the checklist.</p>
</li>
<li><p><strong>How to Change:</strong></p>
<ul>
<li><p><strong>Add/Remove/Modify Items:</strong> Directly edit the markdown list items to add new checks, remove irrelevant ones, or refine existing criteria.</p>
</li>
<li><p><strong>Update LLM Instructions:</strong> Modify the <code>[[LLM: ...]]</code> blocks to guide the agent on how to interpret and apply the checklist items, or what kind of output to generate (e.g., "Be specific - list each requirement and whether it's complete").</p>
</li>
</ul>
</li>
<li><p><strong>Impact of Changes:</strong> Directly impacts the quality and completeness of deliverables. A more rigorous <code>story-dod-checklist.md</code> will lead to higher quality story implementations. It also influences how agents report on their progress and adherence to standards.</p>
</li>
</ul>
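<p>As a purely hypothetical fragment, just to illustrate the shape of such a checklist (the wording is mine, not copied from the BMAD repository):</p>
<pre><code class="lang-markdown">[[LLM: For each item, state PASS or FAIL and quote the evidence.]]

- [ ] Every functional requirement in the story is implemented.
- [ ] All new code paths are covered by tests.
- [ ] Linting passes with no new warnings.
</code></pre>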
<h3 id="heading-33-technical-preferences-bmad-coredatatechnical-preferencesmd">3.3. Technical Preferences (<code>bmad-core/data/technical-preferences.md</code>).</h3>
<ul>
<li><p><strong>Purpose:</strong> This Markdown file allows users to inject their preferred technologies, design patterns, or other technical biases into the planning agents (e.g., PM, Architect).</p>
</li>
<li><p><strong>How to Change:</strong> Add markdown content describing my preferences. For example, I can specify preferred frontend frameworks, backend languages, database types, or architectural styles.</p>
</li>
<li><p><strong>Impact of Changes:</strong> Planning agents will consider these preferences when generating PRDs, architecture documents, and making technology recommendations, leading to plans more aligned with my specific technical stack and philosophy.</p>
</li>
</ul>
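<p>A hypothetical example of what I might put in this file (the specific choices are illustrative, not BMAD defaults):</p>
<pre><code class="lang-markdown"># Technical Preferences

- Frontend: prefer React with TypeScript.
- Backend: prefer Python with FastAPI.
- Database: prefer PostgreSQL; justify any NoSQL choice.
- Architecture: favour a modular monolith over microservices for small projects.
</code></pre>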
<h3 id="heading-34-core-configuration-bmad-corecore-configyaml">3.4. Core Configuration (<code>bmad-core/core-config.yaml</code>).</h3>
<ul>
<li><p><strong>Purpose:</strong> This YAML file contains critical configurations for the BMAD Method, such as the <code>devLoadAlwaysFiles</code> list.</p>
</li>
<li><p><strong>How to Change:</strong> Modify the <code>devLoadAlwaysFiles</code> list to specify which documents (e.g., coding standards, tech stack, project structure) the Dev agent should always load into its context.</p>
</li>
<li><p><strong>Impact of Changes:</strong> Crucial for managing the Dev agent's context. By including lean, focused documents here, I ensure the Dev agent has essential guidelines without unnecessary context bloat, leading to more efficient and compliant code generation.</p>
</li>
</ul>
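<p>An illustrative excerpt (the key name comes from the documentation above; the file paths are placeholders for my own project documents):</p>
<pre><code class="lang-yaml"># bmad-core/core-config.yaml (excerpt, hypothetical paths)
devLoadAlwaysFiles:
  - docs/architecture/coding-standards.md
  - docs/architecture/tech-stack.md
  - docs/architecture/source-tree.md
</code></pre>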
<h3 id="heading-35-tasks-bmad-coretasksmd">3.5. Tasks (<code>bmad-core/tasks/*.md</code>).</h3>
<ul>
<li><p><strong>Purpose:</strong> These Markdown files define step-by-step procedures that an agent follows to complete a specific piece of work. They are the "how-to" guides for agents.</p>
</li>
<li><p><strong>How to Change:</strong> Modify the instructions within a task file to alter the steps an agent takes to perform a function. For example, changing <code>create-doc.md</code> would change how documents are generally created by agents.</p>
</li>
<li><p><strong>Impact of Changes:</strong> Directly affects the execution flow and output of specific agent commands.</p>
</li>
</ul>
<h3 id="heading-36-templates-bmad-coretemplatesyaml">3.6. Templates (<code>bmad-core/templates/*.yaml</code>).</h3>
<ul>
<li><p><strong>Purpose:</strong> These YAML files define the structured output format for documents generated by agents (e.g., PRD, Architecture, Story). They include metadata, workflow configuration, and sections with instructions for content generation.</p>
</li>
<li><p><strong>How to Change:</strong> Modify the YAML structure to define new sections, change existing section titles, or update the <code>instruction</code> fields that guide the LLM on what content to generate for each section.</p>
</li>
<li><p><strong>Impact of Changes:</strong> Determines the structure and content of the documents produced by agents, ensuring consistency and adherence to desired formats.</p>
</li>
</ul>
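<p>A loosely sketched example of what a section definition might look like (the field names beyond <code>instruction</code> are my own guesses, shown only to make the idea concrete):</p>
<pre><code class="lang-yaml"># Hypothetical template excerpt.
sections:
  - title: Functional Requirements
    instruction: List each functional requirement as a single, testable sentence.
  - title: Non-Functional Requirements
    instruction: Capture performance, security, and reliability constraints with measurable targets.
</code></pre>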
<hr />
<h2 id="heading-4-best-practices-for-using-and-customizing-the-bmad-method">4. Best Practices for Using and Customizing the BMAD Method.</h2>
<ul>
<li><p><strong>Keep Dev Agents Lean:</strong> Adhere to the principle of minimal context for development agents. Only load essential files (<code>devLoadAlwaysFiles</code>) to maximize coding efficiency.</p>
</li>
<li><p><strong>Leverage Natural Language:</strong> Since everything is Markdown and natural language, focus on clear, concise, and unambiguous instructions in agent definitions, tasks, and templates.</p>
</li>
<li><p><strong>Use Expansion Packs for Specialization:</strong> For domain-specific needs (e.g., game development, DevOps) or non-technical applications, create or use expansion packs to avoid bloating the core agents.</p>
</li>
<li><p><strong>Iterative Refinement:</strong> Continuously refine agent definitions, checklists, and preferences based on the quality of the AI-generated output and my evolving project needs.</p>
</li>
<li><p><strong>Commit Regularly:</strong> Especially during the development cycle, commit my changes frequently to maintain version control and track progress.</p>
</li>
<li><p><strong>Understand the Workflows:</strong> Familiarize myself with both the Planning and Development workflows to effectively guide the agents and intervene when necessary.</p>
</li>
</ul>
<p>By understanding these core components and their interplay, I can effectively leverage and customize the BMAD Method to streamline my AI-assisted development processes and achieve higher quality outcomes.</p>
<hr />
<h2 id="heading-5-practically-applying-the-bmad-method">5. Practically Applying the BMAD Method.</h2>
<p>To effectively utilize the BMAD Method, I'll need to set up my local environment and understand the practical steps for generating documents. This section guides me through the installation, configuration, and a step-by-step process for creating a complete set of project documents.</p>
<h3 id="heading-51-installation-and-configuration">5.1. Installation and Configuration.</h3>
<p>The BMAD Method is primarily built around Python scripts and Markdown/YAML files, leveraging AI models for content generation.</p>
<h4 id="heading-511-prerequisites">5.1.1. Prerequisites.</h4>
<p>Before I begin, I ensure I have the following installed on my local machine:</p>
<ul>
<li><p><strong>Python 3.9+:</strong> The core of the BMAD Method relies on Python.</p>
</li>
<li><p><strong>pip:</strong> Python's package installer, usually included with Python.</p>
</li>
<li><p><strong>Git:</strong> For cloning the BMAD Method repository.</p>
</li>
<li><p><strong>An IDE (e.g., VS Code):</strong> Recommended for editing Markdown and YAML files, and running scripts.</p>
</li>
<li><p><strong>Access to an LLM API:</strong> The BMAD Method requires access to a Large Language Model (LLM) API (e.g., OpenAI, Anthropic, Google Gemini). I will need an API key for my chosen LLM provider.</p>
</li>
</ul>
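<p>A quick way to confirm the tooling prerequisites are in place before continuing (version numbers will vary):</p>
<pre><code class="lang-bash">python3 --version &amp;&amp; pip --version &amp;&amp; git --version
</code></pre>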
<h4 id="heading-512-setup-steps">5.1.2. Setup Steps.</h4>
<ol>
<li><p><strong>Clone the Repository:</strong> Start by cloning the BMAD Method repository to my local machine:</p>
<pre><code class="lang-bash"> git <span class="hljs-built_in">clone</span> https://github.com/bmadcode/BMAD-METHOD.git
 <span class="hljs-built_in">cd</span> BMAD-METHOD
</code></pre>
</li>
<li><p><strong>Create a Virtual Environment (Recommended):</strong> It's good practice to create a virtual environment to manage dependencies:</p>
<pre><code class="lang-bash"> python -m venv venv
 <span class="hljs-built_in">source</span> venv/bin/activate  <span class="hljs-comment"># On Windows, use `venv\Scripts\activate`</span>
</code></pre>
</li>
<li><p><strong>Install Dependencies:</strong> Install the necessary Python packages. A <code>requirements.txt</code> file is typically provided in the repository.</p>
<pre><code class="lang-bash"> pip install -r requirements.txt
</code></pre>
<p> (If <code>requirements.txt</code> is not present, I would typically install <code>langchain</code>, <code>openai</code> (or equivalent for my LLM), <code>pyyaml</code>, etc., manually.)</p>
</li>
<li><p><strong>Configure LLM API Key:</strong> The BMAD Method will need my LLM API key to interact with the AI models. This is usually done by setting an environment variable. For example, for OpenAI:</p>
<pre><code class="lang-bash"> <span class="hljs-built_in">export</span> OPENAI_API_KEY=<span class="hljs-string">"my_openai_api_key_here"</span>
</code></pre>
<p> Replace <code>"my_openai_api_key_here"</code> with my actual API key. For other LLMs, consult their documentation for the appropriate environment variable name.</p>
</li>
<li><p><strong>Review</strong> <code>bmad-core/core-config.yaml</code>: Familiarize myself with the <code>bmad-core/core-config.yaml</code> file. This file contains crucial configurations, such as <code>devLoadAlwaysFiles</code>, which dictates what documents the Development agent always loads. Adjust this as needed for my projects’ specific standards.</p>
</li>
</ol>
<h3 id="heading-52-step-by-step-document-generation">5.2. Step-by-Step Document Generation.</h3>
<p>The BMAD Method facilitates the creation of a comprehensive set of documents through a guided, agent-driven process. This section outlines a typical flow, starting with high-level planning documents and progressing to detailed development artifacts.</p>
<h4 id="heading-521-phase-1-planning-documents-prd-and-architecture">5.2.1. Phase 1: Planning Documents (PRD and Architecture).</h4>
<p>This phase leverages the planning agents to define the project's scope and technical foundation.</p>
<ol>
<li><p><strong>Initiate Project Idea &amp; Research (Analyst Agent):</strong></p>
<ul>
<li><p>Start with a clear project idea.</p>
</li>
<li><p>Optionally, use the Analyst agent to conduct initial brainstorming, market research, and competitor analysis. This helps in creating a foundational "Project Brief" document.</p>
</li>
<li><p><em>Command Example (conceptual):</em> <code>python bmad.py analyst --task "research_project_idea" --output "ProjectBrief.md"</code></p>
</li>
</ul>
</li>
<li><p><strong>Generate Product Requirement Document (PRD) (Product Manager Agent):</strong></p>
<ul>
<li><p>The Product Manager (PM) agent takes the Project Brief (or my initial idea) and generates a detailed PRD. This document will outline Functional Requirements (FRs), Non-Functional Requirements (NFRs), Epics, and high-level Stories.</p>
</li>
<li><p><em>Command Example (conceptual):</em> <code>python bmad.py pm --task "create_prd" --input "ProjectBrief.md" --output "PRD.md"</code></p>
</li>
</ul>
</li>
<li><p><strong>Design Architecture Document (Architect Agent):</strong></p>
<ul>
<li><p>Based on the generated PRD, the Architect agent designs the system architecture. This document will detail the technical stack, system components, data flow, and overall structural design.</p>
</li>
<li><p><em>Command Example (conceptual):</em> <code>python bmad.py architect --task "design_architecture" --input "PRD.md" --output "Architecture.md"</code></p>
</li>
</ul>
</li>
<li><p><strong>Document Alignment (Product Owner Agent - Optional but Recommended):</strong></p>
<ul>
<li><p>The Product Owner (PO) agent can run a Master Checklist to ensure consistency and alignment between the PRD and Architecture documents. If discrepancies are found, the PO can guide revisions.</p>
</li>
<li><p><em>Command Example (conceptual):</em> <code>python bmad.py po --task "align_documents" --input "PRD.md,Architecture.md"</code></p>
</li>
</ul>
</li>
</ol>
<h4 id="heading-522-phase-2-detailed-development-documents-stories-and-beyond">5.2.2. Phase 2: Detailed Development Documents (Stories and Beyond).</h4>
<p>Once the core planning documents are stable, the focus shifts to creating detailed development artifacts that can directly drive AI-assisted coding.</p>
<ol>
<li><p><strong>Shard Planning Documents (Product Owner Agent):</strong></p>
<ul>
<li><p>The PO agent "shards" the comprehensive PRD and Architecture documents into smaller, manageable pieces. This prepares them for the Scrum Master to draft individual stories.</p>
</li>
<li><p><em>Command Example (conceptual):</em> <code>python bmad.py po --task "shard_documents" --input "PRD.md,Architecture.md" --output-dir "sharded_docs/"</code></p>
</li>
</ul>
</li>
<li><p><strong>Draft Development Stories (Scrum Master Agent):</strong></p>
<ul>
<li><p>The Scrum Master (SM) agent reviews the sharded documents and drafts individual development stories. Each story is a self-contained unit of work, embedding context, implementation details, and architectural guidance.</p>
</li>
<li><p><em>Command Example (conceptual):</em> <code>python bmad.py sm --task "draft_story" --input "sharded_docs/epic_1_part_1.md" --output "Story_FeatureX_01.md"</code></p>
</li>
<li><p>Repeat this step for each story I need to generate.</p>
</li>
</ul>
</li>
<li><p><strong>Generate Code and Tests (Development Agent):</strong></p>
<ul>
<li><p>The Development (Dev) agent takes a drafted story file and generates the actual code and associated tests. The story file provides all the necessary context, minimizing the need for external lookups.</p>
</li>
<li><p><em>Command Example (conceptual):</em> <code>python bmad.py dev --task "implement_story" --input "Story_FeatureX_01.md" --output-dir "src/feature_x/"</code></p>
</li>
</ul>
</li>
<li><p><strong>Quality Assurance and Refactoring (QA Agent - Optional):</strong></p>
<ul>
<li><p>A QA agent can review the generated code and tests against defined quality standards, refactor code, add more tests, and document any findings.</p>
</li>
<li><p><em>Command Example (conceptual):</em> <code>python bmad.py qa --task "review_code" --input "src/feature_x/" --story "Story_FeatureX_01.md"</code></p>
</li>
</ul>
</li>
<li><p><strong>Generate User Guides, API Documentation, etc.:</strong></p>
<ul>
<li><p>Once the core application is developed, I can use specialized agents (or adapt existing ones) to generate further documentation:</p>
<ul>
<li><p><strong>User Guides:</strong> An agent could take the PRD and implemented features to create user-facing documentation.</p>
</li>
<li><p><strong>API Documentation:</strong> An agent could analyze the generated code to produce API specifications (e.g., OpenAPI/Swagger).</p>
</li>
<li><p><strong>Installation Guides:</strong> An agent could synthesize information from the architecture and development process to create detailed installation instructions.</p>
</li>
</ul>
</li>
<li><p>These would typically involve custom tasks and templates tailored for each document type.</p>
</li>
</ul>
</li>
</ol>
<p>By following these steps, I can systematically apply the BMAD Method to produce a comprehensive suite of documents, from high-level requirements to executable code and user-facing guides, all driven by intelligent AI agents.</p>
<hr />
<h2 id="heading-6-agent-names">6. Agent Names.</h2>
<p>The BMAD Method utilizes a set of specialized AI agents, each with a distinct role and purpose within the development workflow. Understanding these agents is key to effectively leveraging the method.</p>
<ul>
<li><p><strong>Analyst Agent:</strong></p>
<ul>
<li><strong>Purpose:</strong> Involved in the initial project idea and research phase. Responsible for brainstorming, market research, and competitor analysis to help create a foundational Project Brief.</li>
</ul>
</li>
<li><p><strong>Product Manager (PM) Agent:</strong></p>
<ul>
<li><strong>Purpose:</strong> Creates the Product Requirement Document (PRD) from the Project Brief. This document details Functional Requirements (FRs), Non-Functional Requirements (NFRs), Epics, and Stories.</li>
</ul>
</li>
<li><p><strong>Architect Agent:</strong></p>
<ul>
<li><strong>Purpose:</strong> Designs the system architecture based on the PRD and optional UX specifications. This includes defining the technical stack, system components, data flow, and overall structural design.</li>
</ul>
</li>
<li><p><strong>UX Expert Agent (Optional):</strong></p>
<ul>
<li><strong>Purpose:</strong> If User Experience (UX) is required, this agent creates a Front End Specification and can generate UI prompts.</li>
</ul>
</li>
<li><p><strong>Product Owner (PO) Agent:</strong></p>
<ul>
<li><strong>Purpose:</strong> Ensures all planning documents (PRD, Architecture, UX Spec) are aligned by running a Master Checklist. If not aligned, the PO updates Epics and Stories and guides revisions. Also responsible for sharding the PRD and Architecture documents for the development cycle.</li>
</ul>
</li>
<li><p><strong>Scrum Master (SM) Agent:</strong></p>
<ul>
<li><strong>Purpose:</strong> Reviews previous development/QA notes and drafts the next development story from the sharded Epic and Architecture documents. Ensures stories are self-contained units of work.</li>
</ul>
</li>
<li><p><strong>Development (Dev) Agent:</strong></p>
<ul>
<li><strong>Purpose:</strong> Executes tasks sequentially, implements code and tests based on the detailed story files, and runs all validations. Marks stories as "Ready for Review" upon completion.</li>
</ul>
</li>
<li><p><strong>QA Agent:</strong></p>
<ul>
<li><strong>Purpose:</strong> Optionally reviews story drafts against existing artifacts. In the core development cycle, performs a senior developer review, refactors code, adds tests, and documents notes. Decides if more Dev work is needed or if the story is approved.</li>
</ul>
</li>
</ul>
<hr />
<h2 id="heading-the-results">The Results.</h2>
<p>I have explored the BMAD Method, a revolutionary framework integrating AI with Agile methodologies to enhance software development. I learned about its key components, customization options, and best practices to streamline my AI-assisted development processes for higher quality outcomes.</p>
<hr />
<h2 id="heading-in-conclusion">In Conclusion.</h2>
<p>The BMAD Method offers a transformative approach to software development by integrating AI with Agile methodologies. By understanding and customizing its key components, such as agent definitions, checklists, and templates, I can streamline my processes and achieve higher quality outcomes. The method's flexibility and emphasis on natural language make it accessible and adaptable to various project needs, ensuring that AI-assisted development is both efficient and effective. As I continue to explore and implement the BMAD Method, I must remember to iterate and refine my approach to maximize its benefits.</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<h2 id="heading-hash-tags">Hash Tags.</h2>
<p>#BMADMethod #AIDrivenDevelopment #AgileDevelopment #SoftwareDevelopment</p>
<p>#AIIntegration #TechInnovation #DevelopmentWorkflow #AIFramework</p>
<p>#CustomizableAI #SoftwareEngineering #AgileMethodologies #AIInSoftware</p>
<p>#TechBestPractices #AIandAgile #DevelopmentEfficiency</p>
]]></content:encoded></item><item><title><![CDATA[Installing Ubuntu Server on M.2 Drives for Pi5 SBCs.]]></title><description><![CDATA[Updated: Tuesday 14th October 2025
TL;DR.
This post is a guide to upgrading Pi5 SBCs (Raspberry Pi 5 Single Board Computers) from using microSD cards to employing M.2 NVMe drives.

Attributions:
None ↗.


An Introduction.
Upgrading from using a micro...]]></description><link>https://solodev.app/installing-ubuntu-server-on-m2-drives-for-pi5-sbcs</link><guid isPermaLink="true">https://solodev.app/installing-ubuntu-server-on-m2-drives-for-pi5-sbcs</guid><category><![CDATA[M2Drive]]></category><category><![CDATA[ServerInstallation]]></category><category><![CDATA[TechUpgrade]]></category><category><![CDATA[Raspberry Pi]]></category><category><![CDATA[Pi5]]></category><category><![CDATA[singleboardcomputer]]></category><category><![CDATA[UbuntuServer]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[#HighAvailability ]]></category><category><![CDATA[Homelab]]></category><category><![CDATA[StorageSolutions]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[DIYTech]]></category><category><![CDATA[ #TechTutorial ]]></category><category><![CDATA[ #TechEnthusiast]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Thu, 19 Jun 2025 10:00:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750249546892/f01819d9-ac0f-40c9-a543-8a65412b6521.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Updated: Tuesday 14th October 2025</p>
<h2 id="heading-tldr">TL;DR.</h2>
<p>This post is a guide to upgrading Pi5 SBCs (Raspberry Pi 5 Single Board Computers) from using microSD cards to employing M.2 NVMe drives.</p>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><strong><em>None ↗.</em></strong></p>
</blockquote>
<hr />
<h2 id="heading-an-introduction">An Introduction.</h2>
<p>Upgrading from using a microSD card on a Pi5 SBC to using an M.2 NVMe drive on the same device can really open up the storage capability of the Raspberry Pi platform:</p>
<blockquote>
<p>The purpose of this post is to demonstrate how to install an OS onto an M.2 NVMe drive for a Pi5 SBC.</p>
</blockquote>
<hr />
<h2 id="heading-the-big-picture">The Big Picture.</h2>
<p>There is a redundancy requirement when assembling an HA (High Availability) cluster. Many different flavours of Kubernetes (K8s, K3s, MicroK8s, K0s, and even Minikube) support HA clustering, and I will use my Homelab hardware, such as it is, to practice the skills of deploying HA solutions. Using Pi5 SBCs as HA Master Nodes (sometimes called Control Planes) is fine due to their low resource requirements. Worker Nodes (sometimes called Data Planes), on the other hand, consume many resources (CPU cycles, RAM, bandwidth, etc.) because they process resource requests. Not only do Worker Nodes need to send a LOT of data, but more of them must also be spun up when demand increases. More nodes means more resources consumed.</p>
<p>For M.2 drives, there is a speed difference between the (older) SATA interface and the (newer) NVMe interface. The drives I use in this post are <a target="_blank" href="https://www.transcend-info.com/product/internal-ssd/mte400s">NVMe PCIe Gen3 x4</a> <strong><em>↗.</em></strong> The M.2 enclosures I use support NVMe drives, and the hats installed in the Pi5 SBCs are also NVMe compatible.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750298647369/f9bca2be-1ddf-4760-937d-8121403d925d.jpeg" alt class="image--center mx-auto" /></p>
<p>For now, my focus is on installing a Linux distro on M.2 NVMe drives that will be used by Pi5 SBCs.</p>
<hr />
<h2 id="heading-prerequisites">Prerequisites.</h2>
<ul>
<li><p>A Debian-based Linux distro (I use Ubuntu),</p>
</li>
<li><p>3 x Pi5 single board computers,</p>
</li>
<li><p>3 x Pi5 active coolers,</p>
</li>
<li><p>3 x M.2 NVMe drives,</p>
</li>
<li><p>3 x Pi5 NVMe hats,</p>
</li>
<li><p>An M.2 NVMe enclosure,</p>
</li>
<li><p>3 x CAT-5 Ethernet cables, and</p>
</li>
<li><p>Access to an Internet router, or a LAN switch, that is connected to the Internet router.</p>
</li>
</ul>
<hr />
<h2 id="heading-updating-my-base-system">Updating my Base System.</h2>
<ul>
<li>From the (base) terminal, I update my (base) system:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<blockquote>
<p>NOTE: The Ollama LLM manager is already installed on my (base) system.</p>
</blockquote>
<hr />
<h2 id="heading-what-is-the-raspberry-pi-imager">What is the Raspberry Pi Imager?</h2>
<p>Raspberry Pi Imager is a user-friendly application that allows me to easily install Raspberry Pi OS, or any other compatible operating system, onto a microSD card for use with Raspberry Pi devices. It simplifies the process of selecting the OS and writing it to the card, ensuring that the installation is done correctly.</p>
<p>In my case, I will be installing Ubuntu Server 24.04 LTS onto an M.2 NVMe drive using an M.2 NVMe enclosure.</p>
<hr />
<h2 id="heading-installing-pi-imager">Installing Pi Imager.</h2>
<ul>
<li>From the terminal, I install the Raspberry Pi Imager:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y rpi-imager
</code></pre>
<hr />
<h2 id="heading-creating-an-ubuntu-server-m2-nvme-drive">Creating an Ubuntu Server M.2 NVMe Drive.</h2>
<blockquote>
<p>NOTE: For this process, I use the Raspberry Pi Imager software, 3 x M.2 NVMe drives, an M.2 NVMe enclosure, 3 x M.2 NVMe hats for Pi5 SBCs, and 3 x Pi5 SBCs.</p>
</blockquote>
<ul>
<li>I purchase three (3) M.2 NVMe drives, 1 for each Pi5, from my <a target="_blank" href="https://www.pbtech.co.nz/">local computer store</a>:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750237202424/f1a06ee3-f772-4aec-8edd-617e0502c884.jpeg" alt class="image--center mx-auto" /></p>
<ul>
<li>I install an M.2 NVMe drive into an M.2 NVMe enclosure:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750237554508/f5b10a96-6a71-4772-8eaa-12f8e4679ea2.jpeg" alt class="image--center mx-auto" /></p>
<ul>
<li>I connect the M.2 NVMe enclosure to my PC:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750237248190/5a859512-6a0b-4872-aedc-f5232ea0d709.jpeg" alt class="image--center mx-auto" /></p>
<ul>
<li>I start the Raspberry Pi Imager:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750237638020/1b09806f-86be-4a80-9115-6325022ef641.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I select <code>Raspberry Pi 5</code> as my <code>Raspberry Pi Device</code>:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750237716571/eed81614-975b-4b6a-9899-0d52ebcaf01b.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I select <code>Ubuntu Server 24.04.2 LTS (64-bit)</code> as the <code>Operating System</code>:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750237860148/d82dbc0b-a471-45cf-ae39-8220aaa6a899.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I select the M.2 NVMe enclosure (<code>Realtek TS512GMTE440S</code>) as the <code>Storage</code>:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750238178870/85b94c7e-279b-4735-948f-3cce1d169969.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>NOTE: <code>TS512GMTE440S</code> is the product ID for the M.2 NVMe drive.</p>
</blockquote>
<ul>
<li>I click the <code>NEXT</code> button:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750238234718/6366d384-bcf4-406b-ba4f-109a33e9e868.png" alt class="image--center mx-auto" /></p>
<ul>
<li>From the <code>Use OS customisation?</code> dialog, I click the <code>EDIT SETTINGS</code> button to open the <code>OS Customisation</code> dialog:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750238326574/7eef9532-d867-4a26-99e9-6044bbdbfeec.png" alt class="image--center mx-auto" /></p>
<ul>
<li>In the <code>GENERAL</code> tab, I use the following settings:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750238431013/a499f217-9ee3-46bd-82c9-2f7635ca8a4b.png" alt class="image--center mx-auto" /></p>
<ul>
<li>In the <code>SERVICES</code> tab, I use the following settings:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750238539770/8973443b-f604-4108-8a03-04df31adfa72.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>NOTE: Setting the public key encryption for SSH will be described in a later post.</p>
</blockquote>
<ul>
<li>In the <code>OPTIONS</code> tab, I use the following settings:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750238705997/00586088-d71e-4797-a5a3-2f300b7efc50.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>I click the <code>SAVE</code> button to return to the previous dialog.</p>
</li>
<li><p>Back in the <code>Use OS customisation?</code> dialog, I click the <code>YES</code> button.</p>
</li>
<li><p>I read the <code>Warning</code> dialog, then click the <code>YES</code> button:</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750239099595/57ae6f33-fd6e-4952-a4c5-bb7eb573d255.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>The Raspberry Pi Imager installs Ubuntu Server 24.04.2 LTS (64-bit) onto the M.2 NVMe drive.</p>
</li>
<li><p>After the image is written to the M.2 NVMe drive, the M.2 NVMe enclosure is automatically ejected from my PC:</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750239587367/10048509-d223-40c3-a97e-128d3e057a48.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>NOTE: The M.2 NVMe enclosure is automatically ejected from my PC due to this setting that was enabled earlier:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750238705997/00586088-d71e-4797-a5a3-2f300b7efc50.png" alt /></p>
</blockquote>
<ul>
<li><p>I unplug the M.2 NVMe enclosure from my PC.</p>
</li>
<li><p>I remove the M.2 NVMe drive from the M.2 NVMe enclosure.</p>
</li>
<li><p>I install the M.2 NVMe drive into the M.2 NVMe hat that I previously added to my Pi5 SBC.</p>
</li>
<li><p>I repeat this process 2 more times (along with making appropriate changes to the settings):</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750240797053/5e0f9434-799f-4faf-a678-a607f83db6a1.jpeg" alt class="image--center mx-auto" /></p>
<blockquote>
<p>NOTE: Now that the M.2 NVMe drives have been imaged and installed, I now need to provide the Pi5 SBCs with a new boot order.</p>
</blockquote>
<hr />
<h2 id="heading-creating-a-microsd-card-bootloader">Creating a MicroSD Card Bootloader.</h2>
<blockquote>
<p>NOTE: For this process, I use the Raspberry Pi Imager software, a microSD card, and a USB card reader.</p>
</blockquote>
<ul>
<li><p>I insert a microSD card into the USB card reader.</p>
</li>
<li><p>I insert the USB card reader into my PC.</p>
</li>
<li><p>I start the Raspberry Pi Imager:</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753067748978/5a7ba162-401a-4952-bf03-da3972ef502b.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I set the <code>Raspberry Pi Device</code> to <code>Raspberry Pi 5</code>:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753072466146/ba250138-712d-4107-88f7-b3f8710bfd8a.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I set the <code>Operating System</code> to <code>Misc utility images &gt; Bootloader (Pi 5 family) &gt; SD Card Boot</code>:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753072665479/8c46e23a-2afc-4a62-8b51-c0d0adf8342f.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I set the <code>Storage</code> to the <code>SD Card Reader - 31.3 GB</code>:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753072746388/817d11f1-b8ba-40c1-a903-a648a690d41b.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>NOTE: 31.3 GB is the usable capacity of the microSD card inserted in the SD Card Reader.</p>
</blockquote>
<ul>
<li>I click the <code>NEXT</code> button:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753072843724/00e79116-ba7e-4624-af60-9354b58f0baa.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I read the <code>Warning</code> dialog and then click the <code>YES</code> button:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753074354974/d6621b7d-2e5e-4340-95fe-c2005b28c6a1.png" alt class="image--center mx-auto" /></p>
<ul>
<li>It takes less than 20 seconds for the Raspberry Pi Imager to finish:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753074136559/0c1f2717-b912-4296-8c57-f685b0700b7e.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>I remove the USB reader from my PC.</p>
</li>
<li><p>I remove the microSD card from the USB card reader.</p>
</li>
</ul>
<hr />
<h2 id="heading-changing-the-pi5-boot-order">Changing the Pi5 Boot Order.</h2>
<ul>
<li><p>I insert the microSD card into the Pi5 microSD card slot.</p>
</li>
<li><p>I power on the Pi5.</p>
</li>
<li><p>I wait for the microSD card to rewrite the bootloader in the EEPROM of the Pi5.</p>
</li>
</ul>
<blockquote>
<p>NOTE: Rewriting the settings for the bootloader only takes a few seconds.</p>
</blockquote>
<ul>
<li><p>A successful rewrite is shown by a blinking green LED and a green screen on the monitor. (A failed rewrite is denoted by a blinking red LED and a red screen.)</p>
</li>
<li><p>I repeat this process for the remaining Pi5 SBCs.</p>
</li>
</ul>
<blockquote>
<p>NOTE: If there is no bootable microSD card in the Pi5, then it will try to boot from an M.2 NVMe drive. If there is no bootable M.2 NVMe drive, then it will try to boot from a USB drive.</p>
</blockquote>
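<p>Once a Pi5 boots again, I can check what the EEPROM now holds from the Pi itself (<code>rpi-eeprom-config</code> ships with the <code>rpi-eeprom</code> package; the exact <code>BOOT_ORDER</code> value depends on the bootloader image written above):</p>
<pre><code class="lang-bash"># Print the current EEPROM bootloader configuration, including BOOT_ORDER.
rpi-eeprom-config
</code></pre>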
<hr />
<h2 id="heading-what-is-a-raspberry-pi">What is a Raspberry Pi?</h2>
<p>A Raspberry Pi is a single-board computer, or SBC, that is a little larger than a credit card, specifically 85mm x 56mm in size. It looks like a miniature motherboard, but all the components, like the CPU, memory, wireless module, USB ports, and the network port, are already (and permanently) installed. Hardware upgrades are enabled through the use of ‘hats’. A hat is a circuit board that is installed on top of, or below, the Raspberry Pi. Hats have pins that electrically connect to the header of the Pi, and may also include a ribbon cable that provides a further connection to the SBC.</p>
<hr />
<h2 id="heading-assembling-a-pi5-that-supports-an-m2-nvme-drive">Assembling a Pi5 that Supports an M.2 NVMe Drive.</h2>
<ul>
<li><p>If installed, I remove the passive heat sink from the Pi5 CPU.</p>
</li>
<li><p>I install the active cooler and attach the fan cable to the fan header of the Pi5.</p>
</li>
<li><p>I remove a specific screw from the active cooler fan.</p>
</li>
<li><p>I connect the NVMe hat to the Pi5 header and use the included long screw to attach it to the active cooler fan.</p>
</li>
<li><p>I use the NVMe ribbon cable to connect the NVMe hat to the Pi5.</p>
</li>
<li><p>I install the M.2 NVMe drive, the drive with Ubuntu Server 24.04 LTS installed, into the NVMe hat and use the included short screw to secure the drive.</p>
</li>
<li><p>On the case, I pop out the case fan because it will get in the way of the NVMe hat.</p>
</li>
<li><p>I install the assembled Pi5 into the Pi case.</p>
</li>
<li><p>I repeat this process 2 more times:</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750298738801/ee9af3c0-a2ab-4773-be2e-5fed79ad57a7.jpeg" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-setting-up-the-local-terminal">Setting Up the Local Terminal.</h2>
<p>The following describes setting up the remote PiLab servers, two of which will become the control plane nodes of the MicroK8s cluster. These settings must also be applied to the <a target="_blank" href="https://solodev.app/installing-proxmox-ve-on-an-intel-nuc-10#heading-creating-a-new-account-for-the-container">eight NucLab containers</a> that will become the worker nodes of the MicroK8s cluster.</p>
<hr />
<h2 id="heading-creating-an-rsa-key-pair-on-the-local-pc">Creating an RSA Key Pair on the Local PC.</h2>
<ul>
<li>From my local, PC terminal (<code>CTRL</code> + <code>ALT</code> + <code>T</code>), I start the ssh-agent:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">eval</span> <span class="hljs-string">"<span class="hljs-subst">$(ssh-agent -s)</span>"</span>
</code></pre>
<ul>
<li>I generate an RSA key pair called "/home/brian/.ssh/key-name" (where I replace "key-name" with the name of the remote server):</li>
</ul>
<pre><code class="lang-bash">ssh-keygen -b 4096
</code></pre>
<blockquote>
<p>NOTE: It is my convention to name RSA keys after the remote server on which they will be used.</p>
</blockquote>
<ul>
<li>I add the SSH key to my workstation account (where I replace "key-name" with the <em>actual</em> name of the ssh key):</li>
</ul>
<pre><code class="lang-bash">ssh-add /home/brian/.ssh/key-name
</code></pre>
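<ul>
<li>As a quick check, I can list the keys the agent is currently holding:</li>
</ul>
<pre><code class="lang-bash">ssh-add -l
</code></pre>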
<hr />
<h2 id="heading-uploading-the-public-key-to-the-remote-server">Uploading the Public Key to the Remote Server.</h2>
<ul>
<li>From the <code>workstation</code> terminal (<code>CTRL</code> + <code>ALT</code> + <code>T</code>), I use "ssh-copy-id" to upload the locally-generated public key to the remote container (where I replace "container-name" with the <em>actual</em> name of the container):</li>
</ul>
<pre><code class="lang-bash">ssh-copy-id -i /home/brian/.ssh/container-name.pub yt@192.168.?.?
</code></pre>
<blockquote>
<p>NOTE: I replace the "?" with the actual IP address for the remote server.</p>
</blockquote>
<hr />
<h2 id="heading-logging-in-to-the-remote-server">Logging In to the Remote Server.</h2>
<ul>
<li>From the terminal (CTRL + ALT + T), I log in to the remote server account:</li>
</ul>
<pre><code class="lang-bash">ssh <span class="hljs-string">'yt@192.168.?.?'</span>
</code></pre>
<blockquote>
<p>NOTE: I replace the "?" with the actual IP address for the remote server.</p>
</blockquote>
<hr />
<h2 id="heading-updating-the-remote-server">Updating the Remote Server.</h2>
<ul>
<li>From the terminal, I update the remote server:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<hr />
<h2 id="heading-hardening-the-remote-server">Hardening the Remote Server.</h2>
<ul>
<li>From the terminal (CTRL + ALT + T) that is connected to the remote server, I open the "sshd_config" file:</li>
</ul>
<pre><code class="lang-bash">sudo nano /etc/ssh/sshd_config
</code></pre>
<ul>
<li>I paste (CTRL + SHIFT + V) the following at the bottom of the "sshd_config" file, save (CTRL + S) the changes, and exit (CTRL + X) the Nano text editor:</li>
</ul>
<pre><code class="lang-bash">PasswordAuthentication no
PermitRootLogin no
Protocol 2
</code></pre>
<ul>
<li>I restart the "ssh" service:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl restart ssh.service
</code></pre>
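<p>Before closing this session, it is worth confirming that the new settings are active; keeping the current connection open guards against a lock-out if something is wrong. A quick check against the effective configuration that <code>sshd</code> reports:</p>
<pre><code class="lang-bash">sudo sshd -T | grep -Ei 'passwordauthentication|permitrootlogin'

# Expected output:
# passwordauthentication no
# permitrootlogin no
</code></pre>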
<hr />
<h2 id="heading-enabling-and-setting-up-ufw-on-the-remote-server">Enabling, and Setting Up, UFW on the Remote Server.</h2>
<ul>
<li>From the PC terminal (<code>CTRL</code> + <code>ALT</code> + <code>T</code>) that is connected to the remote server, I check the UFW status:</li>
</ul>
<pre><code class="lang-bash">sudo ufw status
</code></pre>
<ul>
<li>I enable the UFW:</li>
</ul>
<pre><code class="lang-bash">sudo ufw <span class="hljs-built_in">enable</span>
</code></pre>
<ul>
<li>I install a UFW rule:</li>
</ul>
<pre><code class="lang-bash">sudo ufw allow from 192.168.?.?
</code></pre>
<blockquote>
<p>NOTE: I can use <code>ip a</code> or <code>ip addr</code> in my local PC terminal to find my IP address. <strong><em>I replace the IP address above with the actual address for the</em></strong> <code>workstation</code><strong><em>, e.g. 192.168.188.41.</em></strong></p>
</blockquote>
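<p>A tighter variant of the rule above limits the <code>workstation</code> to SSH only, rather than opening every port. This is a sketch that follows the same placeholder convention:</p>
<pre><code class="lang-bash">sudo ufw allow from 192.168.?.? to any port 22 proto tcp
</code></pre>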
<ul>
<li>I check the status of the UFW and list the rules by number:</li>
</ul>
<pre><code class="lang-bash">sudo ufw status numbered
</code></pre>
<blockquote>
<p>NOTE 1: UFW will, by default, block all incoming traffic, including SSH and HTTP.</p>
<p>NOTE 2: I will update the UFW rules as I deploy other services to the remote server.</p>
</blockquote>
<ul>
<li>I can delete a UFW rule by number if needed:</li>
</ul>
<pre><code class="lang-bash">sudo ufw delete 1
</code></pre>
<ul>
<li>I can also disable UFW if needed:</li>
</ul>
<pre><code class="lang-bash">sudo ufw <span class="hljs-built_in">disable</span>
</code></pre>
<hr />
<h2 id="heading-installing-and-setting-up-fail2ban-on-the-remote-server">Installing, and Setting Up, Fail2Ban on the Remote Server.</h2>
<ul>
<li>From the terminal (CTRL + ALT + T) that is connected to the remote server, I install Fail2Ban:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y fail2ban
</code></pre>
<ul>
<li>I copy the <code>jail.conf</code> file as <code>jail.local</code>:</li>
</ul>
<pre><code class="lang-bash">sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
</code></pre>
<ul>
<li>I open the <code>jail.local</code> file in Nano:</li>
</ul>
<pre><code class="lang-bash">sudo nano /etc/fail2ban/jail.local
</code></pre>
<ul>
<li>I make the following changes to a few (SSH-centric) settings in the <code>jail.local</code> file, then I save (CTRL + S) those changes, and exit (CTRL + X) the Nano text editor:</li>
</ul>
<pre><code class="lang-bash">[DEFAULT]
⋮
bantime = 1d
maxretry = 3
⋮
[sshd]
enabled = <span class="hljs-literal">true</span>
port = ssh,22
</code></pre>
<ul>
<li>I restart Fail2Ban:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl restart fail2ban
</code></pre>
<ul>
<li>I check the status of Fail2Ban:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl status fail2ban
</code></pre>
<ul>
<li>I enable Fail2Ban to auto-start on boot:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl <span class="hljs-built_in">enable</span> fail2ban
</code></pre>
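<ul>
<li>Once the service is running, I can query the <code>sshd</code> jail directly to confirm it is watching the SSH log and to see current and total bans:</li>
</ul>
<pre><code class="lang-bash">sudo fail2ban-client status sshd
</code></pre>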
<ul>
<li>I reboot the remote server:</li>
</ul>
<pre><code class="lang-bash">sudo reboot
</code></pre>
<hr />
<h2 id="heading-my-use-case">My Use Case.</h2>
<p>My plan is to use PiLab51 as the primary control plane, PiLab52 as the secondary control plane, and PiLab53 as a general server for hosting GitLab CE, Pi-hole, and other services. The two NucLab systems will each host Proxmox VE, and each Proxmox host will run four containers. These eight containers will act as the worker nodes, which in turn host the pods (a pod being the smallest deployable Kubernetes unit, containing one or more containers). Together, the three PiLab SBCs and the eight Proxmox containers make up my local cluster.</p>
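<p>Once the cluster is assembled, a quick way to confirm that the control planes and workers have all joined is to list the nodes from one of the PiLab control planes. This is a sketch that assumes MicroK8s is already installed on that node:</p>
<pre><code class="lang-bash">microk8s status --wait-ready
microk8s kubectl get nodes -o wide
</code></pre>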
<hr />
<h2 id="heading-the-results">The Results.</h2>
<p>Transitioning from microSD cards to M.2 NVMe drives for my Pi5 SBCs significantly enhanced the storage capabilities and performance of my Raspberry Pi setup. By following the steps outlined in this post, I successfully installed Ubuntu Server 24.04 LTS on multiple M.2 NVMe drives, paving the way for more robust and efficient use of the Pi5 platform in various applications, including HA (high availability) clustering. This upgrade not only improved data handling and processing speed but also provided a more reliable and scalable solution for my projects. As I continue to explore and implement these technologies, I will adapt and customize my setup to meet my specific needs and objectives.</p>
<hr />
<h2 id="heading-in-conclusion">In Conclusion.</h2>
<p>I learned how to enhance the storage capabilities of my Raspberry Pi 5 SBCs by replacing microSD cards with M.2 NVMe drives. This post covered my prerequisites, the installation steps, and suggested a practical application where Pi5 SBCs can be used in an HA cluster. The Pi5 platform is a potentially strong and flexible, though expensive, resource for Homelab enthusiasts.</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<h2 id="heading-hash-tags">Hash Tags.</h2>
<p>#RaspberryPi #Pi5 #SingleBoardComputer #M2Drive #UbuntuServer #ServerInstallation</p>
<p>#Kubernetes #HighAvailability #Homelab #StorageSolutions #OpenSource #DIYTech</p>
<p>#TechTutorial #TechUpgrade #TechEnthusiast</p>
]]></content:encoded></item><item><title><![CDATA[Generating a Report with CrewAI.]]></title><description><![CDATA[TL;DR.
The post provides a step-by-step guide on deploying and utilising CrewAI agents by following a part of Tyler Reed's YouTube tutorial. This article emphasises learning by doing, documenting the process, and sharing my insights. I highlight the ...]]></description><link>https://solodev.app/generating-a-report-with-crewai</link><guid isPermaLink="true">https://solodev.app/generating-a-report-with-crewai</guid><category><![CDATA[Learning-By-Doing]]></category><category><![CDATA[Practical-AI]]></category><category><![CDATA[CrewAI]]></category><category><![CDATA[AI]]></category><category><![CDATA[AI Projects]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[AI Research]]></category><category><![CDATA[AI community]]></category><category><![CDATA[ai integration]]></category><category><![CDATA[Tech Tutorial]]></category><category><![CDATA[tech blog]]></category><category><![CDATA[open source]]></category><category><![CDATA[VS Code]]></category><category><![CDATA[Ubuntu]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Wed, 07 May 2025 10:00:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747204119988/dd659f7a-15f1-4c26-a38b-feaf0a405a96.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-tldr">TL;DR.</h2>
<p>The post provides a step-by-step guide on deploying and utilising CrewAI agents by following a part of Tyler Reed's YouTube tutorial. This article emphasises learning by doing, documenting the process, and sharing my insights. I highlight the importance of experimenting and creating posts that enhance my understanding of CrewAI's functionalities.</p>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><a target="_blank" href="https://www.youtube.com/watch?v=ONKOXwucLvE&amp;list=PLZSsCXMJ6wBI4yU5YvxuGZXfeTP8CoHo5&amp;index=1&amp;t=931s&amp;pp=gAQBiAQB">https://www.youtube.com/watch?v=ONKOXwucLvE</a> <strong><em>from</em></strong> <a target="_blank" href="https://www.youtube.com/@TylerReedAI"><strong><em>Tyler AI</em></strong></a> <strong><em>↗.</em></strong></p>
</blockquote>
<hr />
<h2 id="heading-an-introduction">An Introduction.</h2>
<p>Although I will be deploying CrewAI agents, my main objective is to show the results of converting hands-on processes into documentation:</p>
<blockquote>
<p>The purpose of this post is to show how practical experiences are captured as documented events.</p>
</blockquote>
<hr />
<h2 id="heading-the-big-picture">The Big Picture.</h2>
<p>This is the first project (in a series of projects) where I cover part of a YouTube video from Tyler Reed.</p>
<p><img src="https://yt3.ggpht.com/tzaY1P_SQIJ6edyXBAtouVgG_4ydxgCYalMsba6ZIXPR9PTZ5xtFBy858mT6jcZYWDzjcNmfPcM=s88-c-k-c0x00ffffff-no-rj" alt /></p>
<p><a target="_blank" href="https://www.youtube.com/@TylerReedAI">Tyler AI</a></p>
<p>If you are new to CrewAI (like me), then I <strong>STRONGLY</strong> suggest you do <em>EXACTLY</em> what I did: Follow Tyler’s instructions, make detailed notes about his process, turn your notes into a blog post, and publish your results. You will learn by doing <em>and</em> you will end up with a post that you can reference. Copying and pasting my post is <em>not</em> a learning process. You will ultimately cheat yourself, and fail to understand how CrewAI works.</p>
<p>Here is a link to Tyler’s video:</p>
<p><a target="_blank" href="https://www.youtube.com/watch?v=ONKOXwucLvE&amp;t=0">https://www.youtube.com/watch?v=ONKOXwucLvE&amp;t=0</a> <strong><em>↗.</em></strong></p>
<hr />
<h2 id="heading-prerequisites">Prerequisites.</h2>
<ul>
<li><p>A Debian-based Linux distro (I use Ubuntu),</p>
</li>
<li><p><a target="_blank" href="https://solodev.app/local-ai-toolkit-for-developers#heading-what-is-crewai">CrewAI</a>.</p>
</li>
</ul>
<hr />
<h2 id="heading-updating-my-base-system">Updating my Base System.</h2>
<ul>
<li>From the (base) terminal, I update my (base) system:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<blockquote>
<p>NOTE: The Ollama LLM manager is already installed on my (base) system.</p>
</blockquote>
<hr />
<h2 id="heading-what-is-the-report-project">What is the Report Project?</h2>
<p>The Report Project shows how to deploy existing tools for CrewAI agents to use. A list of existing tools can be found within the <a target="_blank" href="https://docs.crewai.com/tools/">CrewAI documentation</a> <strong><em>↗</em></strong>.</p>
<hr />
<h3 id="heading-creating-the-report-project">Creating the Report Project.</h3>
<ul>
<li>From the terminal, I navigate to my projects directory:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> ~/AI
</code></pre>
<ul>
<li>I start a new project:</li>
</ul>
<pre><code class="lang-bash">crewai create crew report
</code></pre>
<ul>
<li><p>From the first list of options, I choose the ollama option.</p>
</li>
<li><p>From the next list of options, I choose the ollama/llama3.1 option.</p>
</li>
</ul>
<h3 id="heading-opening-the-report-project-in-vs-code">Opening the Report Project in VS Code.</h3>
<ul>
<li>I open the Report Project in VS Code:</li>
</ul>
<pre><code class="lang-bash">code ./report
</code></pre>
<h3 id="heading-editing-the-env-file">Editing the <code>.env</code> File.</h3>
<ul>
<li>I open the <code>.env</code> file and replace the contents with the following:</li>
</ul>
<pre><code class="lang-nix"><span class="hljs-attr">OPENAI_API_BASE=https://openrouter.ai/api/v1</span> <span class="hljs-comment"># openrouter url here</span>
<span class="hljs-attr">OPENAI_MODEL_NAME=openrouter/google/gemini-2.0-flash-001</span> <span class="hljs-comment"># openrouter/path/model_name here</span>
<span class="hljs-attr">OPENROUTER_API_KEY=your_api_key</span> <span class="hljs-comment"># API key here</span>
</code></pre>
<blockquote>
<p>NOTE: By default, these settings provide this project with access to the OpenRouter.ai API.</p>
</blockquote>
<h3 id="heading-editing-the-mainpy-file">Editing the <code>main.py</code> File.</h3>
<ul>
<li><p>Under the <code>src/report</code> directory, I open the <code>main.py</code> file.</p>
</li>
<li><p>I replace the contents of the <code>main.py</code> file with the following:</p>
</li>
</ul>
<pre><code class="lang-python"><span class="hljs-comment">#!/usr/bin/env python</span>
<span class="hljs-keyword">import</span> sys
<span class="hljs-keyword">import</span> warnings

<span class="hljs-keyword">from</span> datetime <span class="hljs-keyword">import</span> datetime

<span class="hljs-keyword">from</span> crew <span class="hljs-keyword">import</span> Report

warnings.filterwarnings(<span class="hljs-string">"ignore"</span>, category=SyntaxWarning, module=<span class="hljs-string">"pysbd"</span>)

<span class="hljs-comment"># This main file is intended to be a way for you to run your</span>
<span class="hljs-comment"># crew locally, so refrain from adding unnecessary logic into this file.</span>
<span class="hljs-comment"># Replace with inputs you want to test with, it will automatically</span>
<span class="hljs-comment"># interpolate any tasks and agents information</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">run</span>():</span>
    <span class="hljs-string">"""
    Run the crew.
    """</span>
    inputs = {
        <span class="hljs-string">'topic'</span>: <span class="hljs-string">'AI LLMs'</span>,
        <span class="hljs-string">'current_year'</span>: str(datetime.now().year)
    }

    <span class="hljs-keyword">try</span>:
        Report().crew().kickoff(inputs=inputs)
    <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
        <span class="hljs-keyword">raise</span> Exception(<span class="hljs-string">f"An error occurred while running the crew: <span class="hljs-subst">{e}</span>"</span>)

run()
</code></pre>
<p><strong>What changed:</strong></p>
<ul>
<li><p>Near the top, I changed <code>from report.crew import Report</code> to <code>from crew import Report</code>,</p>
</li>
<li><p>I deleted the <code>train()</code>, <code>replay()</code>, and <code>test()</code> functions, and</p>
</li>
<li><p>At the bottom of the file, I added the <code>run()</code> function.</p>
</li>
</ul>
<h3 id="heading-editing-the-crewpy-file">Editing the <code>crew.py</code> File.</h3>
<ul>
<li><p>Under the <code>src/report</code> directory, I open the <code>crew.py</code> file.</p>
</li>
<li><p>I replace the contents of the <code>crew.py</code> file with the following:</p>
</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> crewai <span class="hljs-keyword">import</span> Agent, Crew, Process, Task, LLM
<span class="hljs-keyword">from</span> crewai.project <span class="hljs-keyword">import</span> CrewBase, agent, crew, task
<span class="hljs-keyword">from</span> dotenv <span class="hljs-keyword">import</span> load_dotenv

load_dotenv()

<span class="hljs-comment"># If you want to run a snippet of code before or after the crew starts,</span>
<span class="hljs-comment"># you can use the @before_kickoff and @after_kickoff decorators</span>
<span class="hljs-comment"># https://docs.crewai.com/concepts/crews#example-crew-class-with-decorators</span>

<span class="hljs-meta">@CrewBase</span>
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Report</span>():</span>
    <span class="hljs-string">"""Report crew"""</span>

    <span class="hljs-comment"># Learn more about YAML configuration files here:</span>
    <span class="hljs-comment"># Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended</span>
    <span class="hljs-comment"># Tasks: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended</span>
    agents_config = <span class="hljs-string">'config/agents.yaml'</span>
    tasks_config = <span class="hljs-string">'config/tasks.yaml'</span>

    ollama_llm = LLM(
        model = <span class="hljs-string">'ollama/deepseek-r1:14b'</span>,
        base_url = <span class="hljs-string">'http://localhost:11434'</span>
    )

    <span class="hljs-comment"># If you would like to add tools to your agents, you can learn more about it here:</span>
    <span class="hljs-comment"># https://docs.crewai.com/concepts/agents#agent-tools</span>
<span class="hljs-meta">    @agent</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">researcher</span>(<span class="hljs-params">self</span>) -&gt; Agent:</span>
        <span class="hljs-keyword">return</span> Agent(
            config=self.agents_config[<span class="hljs-string">'researcher'</span>],
            verbose=<span class="hljs-literal">True</span>,
            llm = self.ollama_llm
        )

<span class="hljs-meta">    @agent</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">reporting_analyst</span>(<span class="hljs-params">self</span>) -&gt; Agent:</span>
        <span class="hljs-keyword">return</span> Agent(
            config=self.agents_config[<span class="hljs-string">'reporting_analyst'</span>],
            verbose=<span class="hljs-literal">True</span>,
            llm = self.ollama_llm
        )

    <span class="hljs-comment"># To learn more about structured task outputs,</span>
    <span class="hljs-comment"># task dependencies, and task callbacks, check out the documentation:</span>
    <span class="hljs-comment"># https://docs.crewai.com/concepts/tasks#overview-of-a-task</span>
<span class="hljs-meta">    @task</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">research_task</span>(<span class="hljs-params">self</span>) -&gt; Task:</span>
        <span class="hljs-keyword">return</span> Task(
            config=self.tasks_config[<span class="hljs-string">'research_task'</span>],
        )

<span class="hljs-meta">    @task</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">reporting_task</span>(<span class="hljs-params">self</span>) -&gt; Task:</span>
        <span class="hljs-keyword">return</span> Task(
            config=self.tasks_config[<span class="hljs-string">'reporting_task'</span>],
            output_file=<span class="hljs-string">'report.md'</span>
        )

<span class="hljs-meta">    @crew</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">crew</span>(<span class="hljs-params">self</span>) -&gt; Crew:</span>
        <span class="hljs-string">"""Creates the Report crew"""</span>
        <span class="hljs-comment"># To learn how to add knowledge sources to your crew, check out the documentation:</span>
        <span class="hljs-comment"># https://docs.crewai.com/concepts/knowledge#what-is-knowledge</span>

        <span class="hljs-keyword">return</span> Crew(
            agents=self.agents, <span class="hljs-comment"># Automatically created by the @agent decorator</span>
            tasks=self.tasks, <span class="hljs-comment"># Automatically created by the @task decorator</span>
            process=Process.sequential,
            verbose=<span class="hljs-literal">True</span>,
            <span class="hljs-comment"># process=Process.hierarchical, # In case you wanna use that instead https://docs.crewai.com/how-to/Hierarchical/</span>
        )
</code></pre>
<p><strong>What changed:</strong></p>
<ul>
<li><p>At the top of the file, I added <code>LLM</code> to the end of the line <code>from crewai import Agent, Crew, Process, Task</code>,</p>
</li>
<li><p>Near the top, I added <code>from dotenv import load_dotenv</code> and <code>load_dotenv()</code>, and</p>
</li>
<li><p>Within the Report() class, after <code>tasks_config = 'config/tasks.yaml'</code> I added the following:</p>
<pre><code class="lang-python">  ollama_llm = LLM(
          model = <span class="hljs-string">'ollama/deepseek-r1:14b'</span>,
          base_url = <span class="hljs-string">'http://localhost:11434'</span>
      )
</code></pre>
</li>
<li><p>Within the <code>researcher()</code> and <code>reporting_analyst()</code> agents, I added <code>llm = self.ollama_llm</code> to override the OpenRouter.ai settings in the <code>.env</code> file.</p>
</li>
</ul>
<h3 id="heading-editing-the-agentsyaml-file">Editing the <code>agents.yaml</code> File.</h3>
<ul>
<li><p>Under the <code>src/report/config</code> directory, I open the <code>agents.yaml</code> file.</p>
</li>
<li><p>I replace the contents of the <code>agents.yaml</code> file with the following:</p>
</li>
</ul>
<pre><code class="lang-python">researcher:
  role: &gt;
    {topic} Senior Data Researcher
  goal: &gt;
    Uncover cutting-edge developments <span class="hljs-keyword">in</span> {topic}
  backstory: &gt;
    You are a seasoned researcher <span class="hljs-keyword">with</span> a knack <span class="hljs-keyword">for</span> uncovering the latest
    developments <span class="hljs-keyword">in</span> {topic}. Known <span class="hljs-keyword">for</span> your ability to find the most relevant
    information <span class="hljs-keyword">and</span> present it <span class="hljs-keyword">in</span> a clear <span class="hljs-keyword">and</span> concise manner.

reporting_analyst:
  role: &gt;
    {topic} Reporting Analyst
  goal: &gt;
    Create detailed reports based on {topic} data analysis <span class="hljs-keyword">and</span> research findings
  backstory: &gt;
    You are a meticulous analyst <span class="hljs-keyword">with</span> a keen eye <span class="hljs-keyword">for</span> detail. You are known <span class="hljs-keyword">for</span>
    your ability to turn complex data into clear <span class="hljs-keyword">and</span> concise reports, making
    it easy <span class="hljs-keyword">for</span> others to understand <span class="hljs-keyword">and</span> act on the information you provide.
</code></pre>
<p><strong>What changed:</strong></p>
<ul>
<li>No changes made.</li>
</ul>
<h3 id="heading-editing-the-tasksyaml-file">Editing the <code>tasks.yaml</code> File.</h3>
<ul>
<li><p>Under the <code>src/report/config</code> directory, I open the <code>tasks.yaml</code> file.</p>
</li>
<li><p>I replace the contents of the <code>tasks.yaml</code> file with the following:</p>
</li>
</ul>
<pre><code class="lang-python">research_task:
  description: &gt;
    Conduct a thorough research about {topic}
    Make sure you find any interesting <span class="hljs-keyword">and</span> relevant information given
    the current year <span class="hljs-keyword">is</span> {current_year}.
  expected_output: &gt;
    A list <span class="hljs-keyword">with</span> <span class="hljs-number">10</span> bullet points of the most relevant information about {topic}
  agent: researcher

reporting_task:
  description: &gt;
    Review the context you got <span class="hljs-keyword">and</span> expand each topic into a full section <span class="hljs-keyword">for</span> a report.
    Make sure the report <span class="hljs-keyword">is</span> detailed <span class="hljs-keyword">and</span> contains any <span class="hljs-keyword">and</span> all relevant information.
  expected_output: &gt;
    A fully fledged report <span class="hljs-keyword">with</span> the main topics, each <span class="hljs-keyword">with</span> a full section of information.
    Formatted <span class="hljs-keyword">as</span> markdown without <span class="hljs-string">'```'</span>
  agent: reporting_analyst
</code></pre>
<p><strong>What changed:</strong></p>
<ul>
<li>No changes made.</li>
</ul>
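<h3 id="heading-running-the-report-project">Running the Report Project.</h3>
<ul>
<li>With the configuration in place, I can kick off the crew. Because my edited <code>main.py</code> imports <code>crew</code> directly and calls <code>run()</code> at module level, it runs as a plain script from the directory that holds both files:</li>
</ul>
<pre><code class="lang-bash">cd ~/AI/report/src/report
python main.py
</code></pre>
<blockquote>
<p>NOTE: The reporting task writes its output to <code>report.md</code>, as set by <code>output_file</code> in <code>crew.py</code>.</p>
</blockquote>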
<hr />
<h2 id="heading-the-results">The Results.</h2>
<p>This CrewAI project provides a comprehensive guide to deploying and utilising CrewAI agents effectively. By watching Tyler Reed's video, following his instructions, and documenting the process, I gained a deeper understanding of CrewAI's functionalities. This hands-on approach not only enhanced my learning but also resulted in a valuable reference post. As I embark on this journey, I must experiment, document my findings, and share my insights with the community.</p>
<hr />
<h2 id="heading-in-conclusion">In Conclusion.</h2>
<p>I discovered how to deploy, and utilise, CrewAI agents. I followed along with (some of) Tyler Reed's video tutorial, learned by doing, and enhanced my understanding of CrewAI's functionalities. This process is perfect for me as I look to document my journey while sharing my insights.</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<h2 id="heading-hash-tags">Hash Tags.</h2>
<p>#CrewAI #AI #AI-Projects #AI-Tools #AI-Research #AI-Community</p>
<p>#AI-Integration #Tech-Tutorial #Tech-Blog #Open-Source</p>
<p>#VS-Code #Ubuntu #Learning-By-Doing #Practical-AI</p>
]]></content:encoded></item><item><title><![CDATA[Local AI Toolkit for Ubuntu.]]></title><description><![CDATA[Last update: Wednesday 17th September 2025Last update: Sunday 21st September 2025
TL;DR.
This post is a guide to setting up a local AI toolkit on a Debian-based Linux distribution, specifically Ubuntu. It covers the installation and management of var...]]></description><link>https://solodev.app/local-ai-toolkit-for-ubuntu</link><guid isPermaLink="true">https://solodev.app/local-ai-toolkit-for-ubuntu</guid><category><![CDATA[AI]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[AI development]]></category><category><![CDATA[AI models]]></category><category><![CDATA[AI community]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Deep Learning]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Ubuntu]]></category><category><![CDATA[open source]]></category><category><![CDATA[coding]]></category><category><![CDATA[Tech Guide ]]></category><category><![CDATA[Tech tools]]></category><category><![CDATA[future tech]]></category><category><![CDATA[innovation]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Tue, 06 May 2025 10:00:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746921051526/d469b47f-6f93-4f61-b561-fd72aa6fcce1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last update: Wednesday 17th September 2025<br />Last update: Sunday 21st September 2025</p>
<h2 id="heading-tldr">TL;DR.</h2>
<p>This post is a guide to setting up a local AI toolkit on a Debian-based Linux distribution, specifically Ubuntu. It covers the installation and management of various AI tools and frameworks, including:</p>
<ul>
<li><p>Ollama,</p>
</li>
<li><p>CrewAI,</p>
</li>
<li><p>Crawl4AI,</p>
</li>
<li><p>LLM Axe,</p>
</li>
<li><p>AI Extensions for VS Code,</p>
</li>
<li><p>OpenRouter.ai,</p>
</li>
<li><p>Cursor,</p>
</li>
<li><p>Windsurf,</p>
</li>
<li><p>LM Studio,</p>
</li>
<li><p>Open WebUI,</p>
</li>
<li><p>Pinokio, and</p>
</li>
<li><p>Ngrok.</p>
</li>
</ul>
<p>This guide includes step-by-step instructions for installing, updating, and uninstalling these tools, and recommends virtual environments (specifically <a target="_blank" href="https://solodev.app/installing-miniconda">Conda</a>) as a best practice. Also, this post occasionally offers insights into these tools and how they are utilised during AI development.</p>
<p>Although this post contains many AI tools, the only ones I actually use are Ollama, CrewAI, Crawl4AI, the LLM Axe toolkit, the AI extensions for VS Code, and the OpenRouter.ai website.</p>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><strong><em>various ↗.</em></strong></p>
</blockquote>
<hr />
<h2 id="heading-an-introduction">An Introduction.</h2>
<p>This post highlights the tools I find useful as an AI enthusiast. I want to share my insights into these tools and explain why they are beneficial to AI developers:</p>
<blockquote>
<p>The purpose of this post is to describe the tools that I, as an AI enthusiast, find useful.</p>
</blockquote>
<hr />
<h2 id="heading-the-big-picture">The Big Picture.</h2>
<p>November 30th 2022 saw the launch of AI’s first killer app: ChatGPT (Generative Pre-trained Transformer). Three months later, in February 2023, someone “leaked” the first open LLM (Large Language Model), which was built by Meta, the corporation that owns Facebook. Since then, many open models have dropped, and Hugging Face has become the de facto centre for these AI machines.</p>
<blockquote>
<p>NOTE: From here on, I will refer to LLMs as models.</p>
</blockquote>
<p>It is now early 2025 and, over the last two-and-a-bit years, there has been an explosion of tools and workflows hitting the Internet. The continued growth of commercial, frontier models (e.g. OpenAI models such as GPT-4o, GPT-4o mini, and the GPT-4.1 series) has easily matched, and usually surpassed, the power of my favourite open models (DeepSeek-R1, Phi4, Qwen3, etc.). However, thanks to the rise of MoE (Mixture of Experts) models, reasoning models, agents, agentic tasks and tools, RAG (Retrieval-Augmented Generation) processes, and AI-specific scraping tools, AI enthusiasts can easily develop vibe coding skills <em>on local PCs</em> using open models, frontier models, or a combination of both types of systems.</p>
<hr />
<h2 id="heading-prerequisites">Prerequisites.</h2>
<ul>
<li><p>A Debian-based Linux distribution (I use Ubuntu),</p>
</li>
<li><p>Python 3.11+.</p>
</li>
</ul>
<hr />
<h2 id="heading-updating-my-base-system">Updating my Base System.</h2>
<ul>
<li>From the (base) terminal, I update my (base) system:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<hr />
<h2 id="heading-what-is-ollama">What is Ollama?</h2>
<p>Ollama is a local, large language model manager that facilitates the installation, management, and operation of AI models on personal computers.</p>
<h3 id="heading-installing-ollama">Installing Ollama.</h3>
<ul>
<li>From the terminal, I install Ollama:</li>
</ul>
<pre><code class="lang-bash">curl https://ollama.com/install.sh | sh
</code></pre>
<ul>
<li>I run Ollama as a background service:</li>
</ul>
<pre><code class="lang-bash">ollama serve &amp;
</code></pre>
<blockquote>
<p>NOTE: An error typically displays (see below) because Ollama, by default, <em>already</em> runs as a background service. Also, by default, Ollama runs on port 11434.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745578527337/acba5663-7582-4eba-8f4b-6475a7bcba16.png" alt class="image--center mx-auto" /></p>
</blockquote>
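<p>Since Ollama listens on port 11434, a quick API call confirms the service is up. This is a minimal sketch; swap in any model that has already been pulled:</p>
<pre><code class="lang-bash">curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2:3b",
  "prompt": "Say hello in five words.",
  "stream": false
}'
</code></pre>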
<h3 id="heading-updating-ollama">Updating Ollama.</h3>
<ul>
<li>I update Ollama:</li>
</ul>
<pre><code class="lang-bash">curl https://ollama.com/install.sh | sh
</code></pre>
<blockquote>
<p>NOTE: The update command is the same as the install command.</p>
</blockquote>
<h3 id="heading-pulling-models-from-ollama">Pulling Models from Ollama.</h3>
<ul>
<li>I pull the following models from Ollama (it will take time for these downloads to complete):</li>
</ul>
<pre><code class="lang-bash">ollama pull nomic-embed-text:v1.5 &amp;&amp;
ollama pull falcon3:10b &amp;&amp;
ollama pull cogito:14b &amp;&amp;
ollama pull tulu3:8b &amp;&amp;
ollama pull olmo2:13b &amp;&amp;
ollama pull marco-o1:7b &amp;&amp;
ollama pull smallthinker:3b &amp;&amp;
ollama pull openthinker:7b &amp;&amp;
ollama pull dolphin3:8b &amp;&amp;
ollama pull exaone3.5:7.8b &amp;&amp;
ollama pull exaone-deep:7.8b &amp;&amp;
ollama pull granite3.2-vision:2b &amp;&amp;
ollama pull granite3.3:8b &amp;&amp;
ollama pull gemma3:12b
</code></pre>
<pre><code class="lang-bash">ollama pull llama3.1:8b &amp;&amp;
ollama pull llama3.2:3b &amp;&amp;
ollama pull llama3.2-vision:11b &amp;&amp;
ollama pull phi4-mini-reasoning:3.8b &amp;&amp;
ollama pull phi4-reasoning:14b &amp;&amp;
ollama pull phi4-mini:3.8b &amp;&amp;
ollama pull phi4:14b &amp;&amp;
ollama pull qwen3:14b &amp;&amp;
ollama pull qwen2.5vl:7b &amp;&amp;
ollama pull opencoder:8b &amp;&amp;
ollama pull deepcoder:14b &amp;&amp;
ollama pull codellama:13b &amp;&amp;
ollama pull qwen2.5-coder:14b &amp;&amp;
ollama pull deepseek-coder-v2:16b &amp;&amp;
ollama pull deepseek-r1:14b &amp;&amp;
ollama pull gpt-oss:20b
</code></pre>
<blockquote>
<p>NOTE: These models are perfect for running on my RTX 3060 GPU with 12GB VRAM. Also, this list is continually updated. If a newer version of a model is released, I remove the old listing above, replace it with the new listing, run the pull commands, list the installed models (<code>ollama ls</code>), and remove (<code>ollama rm &lt;model-name&gt;</code>) the older model.</p>
</blockquote>
<h3 id="heading-listing-the-ai-models">Listing the AI Models.</h3>
<ul>
<li>I list the models downloaded by Ollama:</li>
</ul>
<pre><code class="lang-bash">ollama ls
</code></pre>
<h3 id="heading-running-an-ai-model">Running an AI Model.</h3>
<ul>
<li>I run one of the listed AI models:</li>
</ul>
<pre><code class="lang-bash">ollama run deepseek-r1:14b
</code></pre>
<h3 id="heading-stopping-an-ai-model">Stopping an AI Model.</h3>
<ul>
<li>I stop running the AI model by typing the following prompt:</li>
</ul>
<pre><code class="lang-bash">/<span class="hljs-built_in">bye</span>
</code></pre>
<h3 id="heading-uninstalling-ollama">Uninstalling Ollama.</h3>
<ul>
<li>I stop the Ollama service:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl stop ollama
</code></pre>
<ul>
<li>I disable the Ollama service:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl <span class="hljs-built_in">disable</span> ollama
</code></pre>
<ul>
<li>I remove the Ollama service:</li>
</ul>
<pre><code class="lang-bash">sudo rm /etc/systemd/system/ollama.service
</code></pre>
<ul>
<li>I remove the Ollama binary from my bin directory:</li>
</ul>
<pre><code class="lang-bash">sudo rm $(<span class="hljs-built_in">which</span> ollama)
</code></pre>
<ul>
<li>I remove the models downloaded by Ollama:</li>
</ul>
<pre><code class="lang-bash">sudo rm -r /usr/share/ollama
</code></pre>
<ul>
<li>I delete the Ollama service user:</li>
</ul>
<pre><code class="lang-bash">sudo userdel ollama
</code></pre>
<ul>
<li>I delete the Ollama service group:</li>
</ul>
<pre><code class="lang-bash">sudo groupdel ollama
</code></pre>
<hr />
<h2 id="heading-what-is-crewai">What is CrewAI?</h2>
<p>CrewAI is a framework that is used to build AI crews, agents, tasks, and tools.</p>
<h3 id="heading-installing-crewai">Installing CrewAI.</h3>
<blockquote>
<p>NOTE: Best practice involves installing this app in a virtual environment, using either venv or <a target="_blank" href="https://solodev.app/installing-miniconda">Conda</a>.</p>
</blockquote>
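<p>As a minimal example of that best practice, a throwaway venv can be created and activated before running the pip command below (the directory name <code>.venv</code> is just a convention):</p>
<pre><code class="lang-bash">python3 -m venv .venv
source .venv/bin/activate
</code></pre>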
<ul>
<li>From the terminal, I use the pip command to install CrewAI and its tools:</li>
</ul>
<pre><code class="lang-bash">pip install crewai <span class="hljs-string">'crewai[tools]'</span>
</code></pre>
<hr />
<h2 id="heading-what-is-crawl4ai">What is Crawl4AI?</h2>
<p>Crawl4AI is a web-scraping tool that converts website data into a format AI models can understand, or prepares the data for uploading to a vector database (pending).</p>
<h3 id="heading-installing-crawl4ai">Installing Crawl4AI.</h3>
<blockquote>
<p>NOTE: Best practice involves installing this app in a virtual environment, using either venv or <a target="_blank" href="https://solodev.app/installing-miniconda">Conda</a>.</p>
</blockquote>
<ul>
<li>I use the pip command to install the updated version of Crawl4AI:</li>
</ul>
<pre><code class="lang-python">pip install -U crawl4ai
</code></pre>
<ul>
<li>I setup Crawl4AI:</li>
</ul>
<pre><code class="lang-python">crawl4ai-setup
</code></pre>
<ul>
<li>I verify the Crawl4AI installation:</li>
</ul>
<pre><code class="lang-python">crawl4ai-doctor
</code></pre>
<hr />
<h2 id="heading-what-is-llm-axe">What is LLM Axe?</h2>
<p>LLM Axe is a toolkit that provides simple abstractions for commonly used LLM functions.</p>
<h3 id="heading-installing-llm-axe">Installing LLM Axe.</h3>
<ul>
<li>I use pip to install LLM Axe:</li>
</ul>
<pre><code class="lang-bash">pip install llm-axe
</code></pre>
<hr />
<h2 id="heading-what-are-ai-extensions-for-vs-code">What are AI Extensions for VS Code?</h2>
<p>VS Code is a FREE, open-source IDE (integrated development environment) from Microsoft. Extensions are add-ons that are installed within VS Code. These extensions provide extra functionality without having to make changes to the VS Code source code. AI extensions for VS Code are tools and integrations that connect to local, and remote, AI models.</p>
<h3 id="heading-some-popular-ai-extensions-for-vs-code">Some Popular AI Extensions for VS Code.</h3>
<p>Some of my favourite AI Extensions for VS Code include:</p>
<ul>
<li><p>Twinny,</p>
</li>
<li><p>Roo Code,</p>
</li>
<li><p>Cline, and</p>
</li>
<li><p>Continue.</p>
</li>
</ul>
<p>Other Extensions I find useful include:</p>
<ul>
<li><p>AsciiDoc from AsciiDoctor,</p>
</li>
<li><p>AsciiDoctor PDF from AsciiDoctor,</p>
</li>
<li><p>vscode-pdf from tomoki1207, and</p>
</li>
<li><p>Code Spell Checker from Street Side Software.</p>
</li>
</ul>
<blockquote>
<p>NOTE: I have a post that covers the installation of <a target="_blank" href="https://solodev.app/local-system-toolkit-for-ubuntu#heading-installing-vs-code">VS Code and some of its Extensions</a>.</p>
</blockquote>
<hr />
<h2 id="heading-what-is-openrouterai">What is OpenRouter.ai?</h2>
<p><a target="_blank" href="https://openrouter.ai/">OpenRouter</a> <strong><em>↗</em></strong> provides a single API for accessing numerous models from multiple providers.</p>
<h3 id="heading-open-models-vs-openrouter">Open Models vs. OpenRouter.</h3>
<p>I prefer downloading open models to my local system, and then using these models on my personal computer. Using local models ensures my data does not leak, and my constructs are not used to train other models. Sometimes, however, I need to run complex inference. In those cases, I will use better models that run on powerful hardware. This is why I sometimes switch to using OpenRouter.</p>
<p>Choosing to use OpenRouter over open models depends on my requirements at each stage of a project.</p>
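<p>Because OpenRouter exposes an OpenAI-compatible API, a raw request is a short <code>curl</code> call. This is a sketch: the model slug matches the one in my <code>.env</code> example earlier, and the <code>OPENROUTER_API_KEY</code> environment variable is assumed to hold a valid key:</p>
<pre><code class="lang-bash">curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/gemini-2.0-flash-001",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
</code></pre>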
<hr />
<h2 id="heading-what-is-cursor">What is Cursor?</h2>
<p>Cursor is a VS Code fork that provides an improved AI development experience.</p>
<blockquote>
<p>NOTE: The following process can be used to install any AppImage app.</p>
</blockquote>
<h3 id="heading-installing-cursor">Installing Cursor.</h3>
<blockquote>
<p>ATTRIBUTION: <a target="_blank" href="https://forum.cursor.com/t/tutorial-install-cursor-permanently-when-appimage-install-didnt-work-on-linux/7712">https://forum.cursor.com/t/tutorial-install-cursor-permanently-when-appimage-install-didnt-work-on-linux/7712</a></p>
</blockquote>
<ul>
<li><p>From a browser, I download the AppImage file from the <a target="_blank" href="https://www.cursor.com">cursor.com</a> website.</p>
</li>
<li><p>From the file manager, I move the Cursor app to the NAS (network attached storage) server.</p>
</li>
<li><p>From the terminal, I create the Apps/Cursor directory:</p>
</li>
</ul>
<pre><code class="lang-bash">mkdir ~/Apps/Cursor
</code></pre>
<ul>
<li>I copy the Cursor download to the Apps/Cursor directory:</li>
</ul>
<pre><code class="lang-bash">cp ~/Downloads/Ubuntu/Cursor/Cursor*.AppImage ~/Apps/Cursor/Cursor.AppImage
</code></pre>
<blockquote>
<p>NOTE: This copy command changes the download name to Cursor.AppImage.</p>
</blockquote>
<ul>
<li><p>OPTIONAL: I copy the 128px by 128px logo to the ~/Apps/Cursor directory:</p>
</li>
</ul>
<pre><code class="lang-bash">cp /media/brian/Drawings/logos/cursor/Images/Cursor-icon.png ~/Apps/Cursor/Cursor-icon.png
</code></pre>
<blockquote>
<p>NOTE: I downloaded this image from the Internet.</p>
</blockquote>
<ul>
<li>I make the AppImage executable:</li>
</ul>
<pre><code class="lang-bash">chmod +x ~/Apps/Cursor/Cursor.AppImage
</code></pre>
<ul>
<li>I use the Nano text editor to create a desktop entry:</li>
</ul>
<pre><code class="lang-bash">nano ~/.<span class="hljs-built_in">local</span>/share/applications/cursor.desktop
</code></pre>
<ul>
<li>I paste (CTRL + SHIFT + V) the following into the desktop entry:</li>
</ul>
<pre><code class="lang-bash">[Desktop Entry]
Name=Cursor
Exec=/home/brian/Apps/Cursor/Cursor.AppImage
Icon=/home/brian/Apps/Cursor/Cursor-icon.png
Type=Application
Categories=Utility;Development;
</code></pre>
<ul>
<li><p>I save (CTRL + S) the changes, and exit (CTRL + X) the Nano text editor.</p>
</li>
<li><p>I create a symlink to start Cursor from the terminal:</p>
</li>
</ul>
<pre><code class="lang-bash">sudo ln -s ~/Apps/Cursor/Cursor.AppImage /usr/<span class="hljs-built_in">local</span>/bin/cursor
</code></pre>
<ul>
<li><p>I restart my system.</p>
</li>
<li><p>From the terminal, I run the Cursor IDE:</p>
</li>
</ul>
<pre><code class="lang-bash">cursor
</code></pre>
<hr />
<h2 id="heading-what-is-windsurf">What is Windsurf?</h2>
<p>Windsurf is an AI-powered code editor designed to enhance the coding experience with features like automatic context analysis, AI-driven autocompletion, and an intuitive user interface.</p>
<h3 id="heading-installing-windsurf">Installing Windsurf.</h3>
<ul>
<li>I add the Windsurf repo to my local system:</li>
</ul>
<pre><code class="lang-bash">curl -fsSL <span class="hljs-string">"https://windsurf-stable.codeiumdata.com/wVxQEIWkwPUEAGf3/windsurf.gpg"</span> | sudo gpg --dearmor -o /usr/share/keyrings/windsurf-stable-archive-keyring.gpg
<span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [signed-by=/usr/share/keyrings/windsurf-stable-archive-keyring.gpg arch=amd64] https://windsurf-stable.codeiumdata.com/wVxQEIWkwPUEAGf3/apt stable main"</span> | sudo tee /etc/apt/sources.list.d/windsurf.list &gt; /dev/null
</code></pre>
<ul>
<li>I update my system:</li>
</ul>
<pre><code class="lang-bash">sudo apt update
</code></pre>
<ul>
<li>I install Windsurf:</li>
</ul>
<pre><code class="lang-bash">sudo apt upgrade windsurf
</code></pre>
<hr />
<h2 id="heading-what-is-lm-studio">What is LM Studio?</h2>
<p>LM Studio is an AI application for running local AI models.</p>
<blockquote>
<p>NOTE: The following process can be used to install any AppImage app.</p>
</blockquote>
<h3 id="heading-installing-lm-studio">Installing LM Studio.</h3>
<ul>
<li><p>I use a browser to download the AppImage file from the <a target="_blank" href="https://lmstudio.ai/download">https://lmstudio.ai/download</a> website.</p>
</li>
<li><p>From the file manager, I move the LMStudio app to the NAS (network attached storage) server.</p>
</li>
<li><p>From the terminal, I create the Apps/LMStudio directory:</p>
</li>
</ul>
<pre><code class="lang-bash">mkdir ~/Apps/LMStudio
</code></pre>
<ul>
<li>I copy the LMStudio download to the Apps/LMStudio directory:</li>
</ul>
<pre><code class="lang-bash">cp /media/brian/Downloads/Ubuntu/<span class="hljs-string">'LM Studio'</span>/LM*.AppImage ~/Apps/LMStudio/LMStudio.AppImage
</code></pre>
<blockquote>
<p>NOTE: This copy command changes the download name to LMStudio.AppImage.</p>
</blockquote>
<ul>
<li>I copy the 128px by 128px logo from the NAS to the ~/Apps/LMStudio directory:</li>
</ul>
<pre><code class="lang-bash">cp /media/brian/Drawings/logos/lmstudio/Images/LMStudio-icon.png ~/Apps/LMStudio/LMStudio-icon.png
</code></pre>
<blockquote>
<p>NOTE: I downloaded this image from the Internet.</p>
</blockquote>
<ul>
<li>I make the AppImage executable:</li>
</ul>
<pre><code class="lang-bash">chmod +x ~/Apps/LMStudio/LMStudio.AppImage
</code></pre>
<ul>
<li>I use the Nano text editor to create a desktop entry:</li>
</ul>
<pre><code class="lang-bash">nano ~/.<span class="hljs-built_in">local</span>/share/applications/lmstudio.desktop
</code></pre>
<ul>
<li>I paste (CTRL + SHIFT + V) the following into the desktop entry:</li>
</ul>
<pre><code class="lang-bash">[Desktop Entry]
Name=LM Studio
Exec=/home/brian/Apps/LMStudio/LMStudio.AppImage
Icon=/home/brian/Apps/LMStudio/LMStudio-icon.png
Type=Application
Categories=Utility;Development;
</code></pre>
<ul>
<li><p>I save (CTRL + S) the changes, and exit (CTRL + X) the Nano text editor.</p>
</li>
<li><p>I create a symbolic link (-s) to start LM Studio from the terminal:</p>
</li>
</ul>
<pre><code class="lang-bash">sudo ln -s ~/Apps/LMStudio/LMStudio.AppImage /usr/<span class="hljs-built_in">local</span>/bin/lmstudio
</code></pre>
<ul>
<li>I run LMStudio:</li>
</ul>
<pre><code class="lang-bash">lmstudio
</code></pre>
<hr />
<h2 id="heading-what-is-open-webui">What is Open WebUI?</h2>
<p>Open WebUI is a browser-based interface for using AI models.</p>
<h3 id="heading-installing-open-webui">Installing Open WebUI.</h3>
<ul>
<li>From the terminal, I use Docker to pull Open WebUI:</li>
</ul>
<pre><code class="lang-bash">sudo docker pull ghcr.io/open-webui/open-webui:main
</code></pre>
<ul>
<li>Now I can run Open WebUI on port 3000 from within the Docker container:</li>
</ul>
<pre><code class="lang-bash">docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main
</code></pre>
<ul>
<li>Alternatively, I can start Open WebUI from Docker Desktop.</li>
</ul>
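<ul>
<li>To confirm the container is up and the port mapping took, I can check Docker and then browse to <code>http://localhost:3000</code>:</li>
</ul>
<pre><code class="lang-bash">sudo docker ps --filter name=open-webui
</code></pre>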
<hr />
<h2 id="heading-what-is-pinokio">What is Pinokio?</h2>
<p>Pinokio is a browser-style launcher that installs, runs, and automates applications, including many AI tools, from one-click scripts.</p>
<h3 id="heading-installing-pinokio">Installing Pinokio.</h3>
<ul>
<li>I use a browser to visit the download page for Pinokio:</li>
</ul>
<pre><code class="lang-bash">https://github.com/pinokiocomputer/pinokio/releases
</code></pre>
<ul>
<li><p>I download the latest version of Pinokio that runs on Debian (.deb) distributions for AMD64 processors.</p>
</li>
<li><p>From the terminal, I install Pinokio:</p>
</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y ~/Downloads/Pinokio*
</code></pre>
<ul>
<li>I run Pinokio from the Apps Drawer.</li>
</ul>
<hr />
<h2 id="heading-what-is-ngrok">What is Ngrok?</h2>
<p>Ngrok creates secure tunnels to localhost, allowing developers to expose local servers to the Internet.</p>
<h3 id="heading-installing-ngrok">Installing Ngrok.</h3>
<ul>
<li><p>In a browser, I visit the <a target="_blank" href="https://ngrok.com/">https://ngrok.com/</a> website and create an account.</p>
</li>
<li><p>I copy the authentication token.</p>
</li>
<li><p>From a new terminal tab, I install Ngrok:</p>
</li>
</ul>
<pre><code class="lang-bash">sudo snap install ngrok
</code></pre>
<ul>
<li>I add the authentication token:</li>
</ul>
<pre><code class="lang-bash">ngrok config add-authtoken &lt;auth_token&gt;
</code></pre>
<ul>
<li>I point Ngrok to any local port number that is running a server:</li>
</ul>
<pre><code class="lang-bash">ngrok http &lt;local_port_number&gt;
</code></pre>
<hr />
<h2 id="heading-the-results">The Results.</h2>
<p>Setting up a local AI toolkit gives me a solid foundation for developing my AI skills. By following this guide, I can efficiently install, manage, and utilise various AI tools and frameworks. Combining Ollama, CrewAI, Pinokio, Open WebUI, LM Studio, Ngrok, and Cursor provides a robust environment for developing and experimenting with open AI models. This well-equipped, local toolkit will help me develop my vibe coding skills.</p>
<hr />
<h2 id="heading-in-conclusion">In Conclusion.</h2>
<p>In this post, I described how to set up a powerful local AI toolkit on Ubuntu with step-by-step instructions. I learned how to install and manage AI tools like Ollama, CrewAI, and more, enhancing my AI development capabilities. This article is perfect for AI enthusiasts and developers looking to harness both open and commercial AI technologies.</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<h2 id="heading-hash-tags">Hash Tags.</h2>
<p>#AI #Artificial-Intelligence #AI-Development #AI-Models #AI-Community #Machine-Learning #Deep-Learning #Linux #Ubuntu #Open-Source #Coding #Tech-Guide #Tech-Tools #Future-Tech #Innovation</p>
]]></content:encoded></item><item><title><![CDATA[Local System Toolkit for Ubuntu.]]></title><description><![CDATA[Latest update: 15th February 2026
TL;DR.
This post provides a guide to setting up a personalised Ubuntu 24.04 LTS system with essential applications and utilities. It covers software installation for productivity, development, and media tasks, includ...]]></description><link>https://solodev.app/local-system-toolkit-for-ubuntu</link><guid isPermaLink="true">https://solodev.app/local-system-toolkit-for-ubuntu</guid><category><![CDATA[Ubuntu]]></category><category><![CDATA[Ubuntu Studio]]></category><category><![CDATA[Linux]]></category><category><![CDATA[open source]]></category><category><![CDATA[Productivity]]></category><category><![CDATA[software development]]></category><category><![CDATA[Blender]]></category><category><![CDATA[DaVinci Resolve Studio]]></category><category><![CDATA[VS Code]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Spotify]]></category><category><![CDATA[innovation]]></category><category><![CDATA[Tech community]]></category><category><![CDATA[Tech tools]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Mon, 05 May 2025 10:00:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746440588224/03f98290-7290-4d41-ae55-bda8523b31bf.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Latest update: 15th February 2026</p>
<h2 id="heading-tldr">TL;DR.</h2>
<p>This post provides a guide to setting up a personalised Ubuntu 24.04 LTS system with essential applications and utilities. It covers software installation for productivity, development, and media tasks, including package managers, partition tools, and specialised applications like VS Code and DaVinci Resolve Studio. This guide emphasises the importance of keeping my system updated while exploring new tools that optimise workflows, support daily tasks, and help to achieve long-term goals.</p>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><strong><em>Various ↗.</em></strong></p>
</blockquote>
<hr />
<h2 id="heading-an-introduction">An Introduction.</h2>
<p>Setting up a practical Ubuntu system requires a collection of apps and utilities that meet my needs. The applications and services I list below are used to adapt the Ubuntu 24.04 LTS distribution to my requirements.</p>
<blockquote>
<p>The purpose of this post is to identify some apps and utilities that I find useful.</p>
</blockquote>
<hr />
<h2 id="heading-the-big-picture">The Big Picture.</h2>
<p>Software that is installed on my PC defines what I want to achieve with my computer. I run a dual-boot PC that allows me to boot into Ubuntu or Windows.</p>
<p>Ubuntu is my daily driver where I spend my time writing these blog posts, and practising my SysOps skills. It is also where I will develop my skills as a software developer and AI generalist.</p>
<p>I also run Windows-only programs like the Reason DAW (digital audio workstation) and the Anadigm Designer 2 computer-aided design tool.</p>
<p>A PC is a generic tool where the installed software defines what the user is capable of achieving.</p>
<hr />
<h2 id="heading-prerequisites">Prerequisites.</h2>
<ul>
<li>A Debian-based Linux distro (I use Ubuntu).</li>
</ul>
<hr />
<h2 id="heading-updating-the-system">Updating the System.</h2>
<p>APT (Advanced Package Tool) handles the installation and removal of software on Debian and Debian-based Linux distributions. The following commands are used to keep my Ubuntu system (a Debian-based Linux distro) up-to-date.</p>
<ul>
<li>From the terminal, I update my system:</li>
</ul>
<pre><code class="lang-bash">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<ul>
<li>I go to Settings &gt; System &gt; Software Updates to update my system.</li>
</ul>
<hr />
<h2 id="heading-activating-ubuntu-pro">Activating Ubuntu Pro.</h2>
<p>Ubuntu Pro is a subscription service that extends the Long Term Support (LTS) versions of Ubuntu from 5 years to 10 years (and soon to be 12 years).</p>
<ul>
<li><p>In the Apps Drawer, I click on the blue, <code>Additional Drivers</code> icon to start the utility.</p>
</li>
<li><p>In the <code>Additional Drivers</code> window, I click on the <code>Ubuntu Pro</code> tab.</p>
</li>
<li><p>I click on the <code>Enable Ubuntu Pro</code> button.</p>
</li>
<li><p>I can create an account, or log in to an existing Ubuntu One account.</p>
</li>
<li><p>The <code>Enable Ubuntu Pro</code> window generates a token.</p>
</li>
<li><p>I take the token to <a target="_blank" href="https://ubuntu.com/pro/attach">https://ubuntu.com/pro/attach</a>.</p>
</li>
<li><p>It takes a moment for the subscription to apply successfully.</p>
</li>
</ul>
<blockquote>
<p>NOTE: Canonical provides up to five (5) FREE tokens for personal PCs.</p>
</blockquote>
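<ul>
<li>As a terminal-only sketch of the same process (assuming the token from the steps above stands in for <code>YOUR_TOKEN</code>), I can attach the subscription with the Ubuntu Pro client:</li>
</ul>
<pre><code class="lang-bash"># Attach this machine to my Ubuntu Pro subscription (YOUR_TOKEN is a placeholder).
sudo pro attach YOUR_TOKEN

# Confirm which Pro services are now enabled.
pro status
</code></pre>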
<hr />
<h2 id="heading-my-terminal-settings">My Terminal Settings.</h2>
<p>A terminal is a text window where system commands are issued.</p>
<ul>
<li><p>I go to Preferences &gt; Unnamed &gt; Text tab,</p>
</li>
<li><p>I set the <code>Initial terminal size</code> to 80 columns and 24 rows,</p>
</li>
<li><p>I set the <code>Custom font</code> to Monospace at 20pt,</p>
</li>
<li><p>I set the <code>Allow blinking text</code> to Never, and</p>
</li>
<li><p>I set the <code>Cursor blinking</code> to Disabled.</p>
</li>
</ul>
<blockquote>
<p>NOTE: These settings make it easier to see the commands I use.</p>
</blockquote>
<hr />
<h2 id="heading-connecting-my-system-to-my-nas">Connecting My System to My NAS.</h2>
<blockquote>
<p>NOTE: This section can safely be skipped if there is no NAS to connect.</p>
</blockquote>
<h3 id="heading-changing-the-owner-of-my-system-images">Changing the Owner of My System Images.</h3>
<blockquote>
<p>NOTE: An image is a snapshot of my system. Due to my app development process, I sometimes end up with a flaky system. Using an image to restore my system to a previous state sidesteps the need to reinstall my OS and all of my apps. Creating images is an easy process when using CloneZilla Live from a USB thumb drive. After creating the USB drive, I can boot into CloneZilla Live and start “cloning” my system as an image. An external HDD is handy because the resulting image cannot be saved to a system drive that is being cloned. I use a simple naming convention, e.g. ‘datetime-img-clean-win10-ubu24’ where: ‘datetime-img-’ is automatically prepended to the name, ‘clean’ refers to a fresh installation, ‘win10’ identifies Windows 10, and ‘ubu24’ is a reference to Ubuntu Desktop 24.04.x LTS.</p>
</blockquote>
<ul>
<li><p>I power up my PC.</p>
</li>
<li><p>In the file manager, I navigate to the external drive that holds my system images.</p>
</li>
<li><p>I access the root directory of the drive because that is where I save my images when they are made.</p>
</li>
<li><p>I right-click the file manager and select <code>Open in Terminal</code> from the pop-up menu.</p>
</li>
<li><p>From the terminal, I change the owner of an image:</p>
</li>
</ul>
<pre><code class="lang-bash">sudo chown -R <span class="hljs-variable">$USER</span>:<span class="hljs-variable">$USER</span> datetime-img-simplified-name-of-contents
</code></pre>
<h3 id="heading-installing-network-utilities">Installing Network Utilities.</h3>
<blockquote>
<p>NOTE: CIFS is a network file-sharing protocol that allows Linux systems to access Windows shares. Smbclient is a command-line tool that allows users to access and interact with SMB/CIFS file shares, commonly used in Windows environments, and Samba servers.</p>
</blockquote>
<ul>
<li>I install the CIFS utilities:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y cifs-utils smbclient
</code></pre>
<blockquote>
<p>NOTE: CIFS is a dialect of SMB.</p>
</blockquote>
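<ul>
<li>Before editing any configuration, I can confirm the NAS shares are visible. This is a quick sketch, assuming the NAS answers at <code>192.168.0.2</code> and the share account is <code>yt</code> (the placeholders used in the examples below):</li>
</ul>
<pre><code class="lang-bash"># List the shares exported by the NAS; this prompts for the account password.
smbclient -L //192.168.0.2 -U yt
</code></pre>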
<h3 id="heading-removing-the-system-directories">Removing the System Directories.</h3>
<ul>
<li>I remove these directories:</li>
</ul>
<pre><code class="lang-bash">sudo rm -r ~/Desktop ~/Documents ~/Downloads ~/Music ~/Pictures ~/Public ~/Templates ~/Videos
</code></pre>
<blockquote>
<p>NOTE: It is now VITAL to continue this section until the very end. These deleted directories are important to the UI and UX of the desktop file manager.</p>
</blockquote>
<h3 id="heading-creating-a-credentials-file">Creating a Credentials File.</h3>
<ul>
<li>I make a hidden file called <code>.cred_smb</code> in my home directory:</li>
</ul>
<pre><code class="lang-bash">sudo touch /home/yt/.cred_smb
</code></pre>
<ul>
<li>I open the <code>.cred_smb</code> file using the Nano text editor:</li>
</ul>
<pre><code class="lang-bash">sudo nano /home/yt/.cred_smb
</code></pre>
<ul>
<li>I copy the following, add it (CTRL + SHIFT + V) to the <code>.cred_smb</code> file, save (CTRL + S) the changes, and exit (CTRL + X) Nano:</li>
</ul>
<pre><code class="lang-plaintext">username=yt
password=super-secret-password
domain=WORKGROUP
</code></pre>
<ul>
<li>I change the access permissions for the <code>.cred_smb</code> file:</li>
</ul>
<pre><code class="lang-bash">sudo chmod 600 ~/.cred_smb
</code></pre>
<h3 id="heading-altering-the-fstab-file">Altering the <code>fstab</code> File.</h3>
<ul>
<li><p>I use a QNAP utility called <a target="_blank" href="https://www.qnap.com/en/software/qfinder-pro">Qfinder Pro</a> that helps identify the IP address of my NAS.</p>
</li>
<li><p>I make a copy of the <code>fstab</code> file as <code>fstab.bak</code>:</p>
</li>
</ul>
<pre><code class="lang-bash">sudo cp /etc/fstab /etc/fstab.bak
</code></pre>
<ul>
<li>I use the Nano text editor to open the <code>fstab</code> file:</li>
</ul>
<pre><code class="lang-bash">sudo nano /etc/fstab
</code></pre>
<ul>
<li>I copy the following, add it (CTRL + SHIFT + V) to the bottom of the <code>fstab</code> file, save (CTRL + S) the changes, and exit (CTRL + X) Nano:</li>
</ul>
<pre><code class="lang-plaintext">//192.168.0.2/ai             /media/yt/AI cifs vers=3.0,uid=1000,gid=1000,credentials=/home/yt/.cred_smb
//192.168.0.2/desktop        /media/yt/Desktop cifs vers=3.0,uid=1000,gid=1000,credentials=/home/yt/.cred_smb
//192.168.0.2/mydocs         /media/yt/Documents cifs vers=3.0,uid=1000,gid=1000,credentials=/home/yt/.cred_smb
//192.168.0.2/downloads      /media/yt/Downloads cifs vers=3.0,uid=1000,gid=1000,credentials=/home/yt/.cred_smb
//192.168.0.2/drawings       /media/yt/Drawings cifs vers=3.0,uid=1000,gid=1000,credentials=/home/yt/.cred_smb
//192.168.0.2/images         /media/yt/Images cifs vers=3.0,uid=1000,gid=1000,credentials=/home/yt/.cred_smb
//192.168.0.2/multimedia     /media/yt/Media cifs vers=3.0,uid=1000,gid=1000,credentials=/home/yt/.cred_smb
//192.168.0.2/music          /media/yt/Music cifs vers=3.0,uid=1000,gid=1000,credentials=/home/yt/.cred_smb
//192.168.0.2/mydocs         /media/yt/MyDocs cifs vers=3.0,uid=1000,gid=1000,credentials=/home/yt/.cred_smb
//192.168.0.2/mydrive        /media/yt/MyDrive cifs vers=3.0,uid=1000,gid=1000,credentials=/home/yt/.cred_smb
//192.168.0.2/photos         /media/yt/Pictures cifs vers=3.0,uid=1000,gid=1000,credentials=/home/yt/.cred_smb
//192.168.0.2/public         /media/yt/Public cifs vers=3.0,uid=1000,gid=1000,credentials=/home/yt/.cred_smb
//192.168.0.2/screencasts    /media/yt/Screencasts cifs vers=3.0,uid=1000,gid=1000,credentials=/home/yt/.cred_smb
//192.168.0.2/screenshots    /media/yt/Screenshots cifs vers=3.0,uid=1000,gid=1000,credentials=/home/yt/.cred_smb
//192.168.0.2/templates      /media/yt/Templates cifs vers=3.0,uid=1000,gid=1000,credentials=/home/yt/.cred_smb
//192.168.0.2/videos         /media/yt/Videos cifs vers=3.0,uid=1000,gid=1000,credentials=/home/yt/.cred_smb
</code></pre>
<h3 id="heading-making-remote-share-directories">Making Remote Share Directories.</h3>
<ul>
<li>I create these directories where the remote shares will mount:</li>
</ul>
<pre><code class="lang-bash">sudo mkdir /media/yt/AI &amp;&amp; \
sudo mkdir /media/yt/Desktop &amp;&amp; \
sudo mkdir /media/yt/Documents &amp;&amp; \
sudo mkdir /media/yt/Downloads &amp;&amp; \
sudo mkdir /media/yt/Drawings &amp;&amp; \
sudo mkdir /media/yt/Images &amp;&amp; \
sudo mkdir /media/yt/Media &amp;&amp; \
sudo mkdir /media/yt/Music &amp;&amp; \
sudo mkdir /media/yt/MyDocs &amp;&amp; \
sudo mkdir /media/yt/MyDrive &amp;&amp; \
sudo mkdir /media/yt/Pictures &amp;&amp; \
sudo mkdir /media/yt/Public &amp;&amp; \
sudo mkdir /media/yt/Screencasts &amp;&amp; \
sudo mkdir /media/yt/Screenshots &amp;&amp; \
sudo mkdir /media/yt/Templates &amp;&amp; \
sudo mkdir /media/yt/Videos
</code></pre>
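<ul>
<li>With the mount points in place, I can test the new <code>fstab</code> entries before creating the symlinks. A minimal check, assuming the entries above were saved correctly:</li>
</ul>
<pre><code class="lang-bash"># Mount everything listed in /etc/fstab that is not already mounted.
sudo mount -a

# List the mounted CIFS shares to confirm the NAS is attached.
findmnt -t cifs
</code></pre>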
<h3 id="heading-creating-symbolic-links">Creating Symbolic Links.</h3>
<ul>
<li>I create these symlinks (symbolic links) where the remote shares will display:</li>
</ul>
<pre><code class="lang-bash">ln -s <span class="hljs-string">"/media/yt/AI"</span> <span class="hljs-string">"/home/yt/AI"</span>  &amp;&amp; \
ln -s <span class="hljs-string">"/media/yt/Desktop"</span> <span class="hljs-string">"/home/yt/Desktop"</span> &amp;&amp; \
ln -s <span class="hljs-string">"/media/yt/MyDocs"</span> <span class="hljs-string">"/home/yt/Documents"</span> &amp;&amp; \
ln -s <span class="hljs-string">"/media/yt/Downloads"</span> <span class="hljs-string">"/home/yt/Downloads"</span> &amp;&amp; \
ln -s <span class="hljs-string">"/media/yt/Drawings"</span> <span class="hljs-string">"/home/yt/Drawings"</span> &amp;&amp; \
ln -s <span class="hljs-string">"/media/yt/Images"</span> <span class="hljs-string">"/home/yt/Images"</span> &amp;&amp; \
ln -s <span class="hljs-string">"/media/yt/Media"</span> <span class="hljs-string">"/home/yt/Media"</span> &amp;&amp; \
ln -s <span class="hljs-string">"/media/yt/Music"</span> <span class="hljs-string">"/home/yt/Music"</span> &amp;&amp; \
ln -s <span class="hljs-string">"/media/yt/MyDocs"</span> <span class="hljs-string">"/home/yt/MyDocs"</span> &amp;&amp; \
ln -s <span class="hljs-string">"/media/yt/MyDrive"</span> <span class="hljs-string">"/home/yt/MyDrive"</span> &amp;&amp; \
ln -s <span class="hljs-string">"/media/yt/Pictures"</span> <span class="hljs-string">"/home/yt/Pictures"</span> &amp;&amp; \
ln -s <span class="hljs-string">"/media/yt/Public"</span> <span class="hljs-string">"/home/yt/Public"</span> &amp;&amp; \
ln -s <span class="hljs-string">"/media/yt/Screencasts"</span> <span class="hljs-string">"/home/yt/Screencasts"</span> &amp;&amp; \
ln -s <span class="hljs-string">"/media/yt/Screenshots"</span> <span class="hljs-string">"/home/yt/Screenshots"</span> &amp;&amp; \
ln -s <span class="hljs-string">"/media/yt/Templates"</span> <span class="hljs-string">"/home/yt/Templates"</span> &amp;&amp; \
ln -s <span class="hljs-string">"/media/yt/Videos"</span> <span class="hljs-string">"/home/yt/Videos"</span>
</code></pre>
<ul>
<li>I update my system:</li>
</ul>
<pre><code class="lang-bash">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<ul>
<li><p>I reboot my system.</p>
</li>
<li><p>I check the symlinks to my NAS.</p>
</li>
</ul>
<h3 id="heading-restoring-the-system-directories">Restoring the System Directories.</h3>
<ul>
<li>Once I have access to my NAS, I can edit the following file so the default system directories point to the equivalent share directories which, in turn, point to the NAS:</li>
</ul>
<pre><code class="lang-bash">sudo nano <span class="hljs-variable">$HOME</span>/.config/user-dirs.dirs
</code></pre>
<blockquote>
<p>NOTE: For example: XDG_DOWNLOAD_DIR="/media/yt/Downloads" points the system Downloads directory to the remote share directory I created earlier.</p>
</blockquote>
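<ul>
<li>As a sketch, the relevant entries in my <code>user-dirs.dirs</code> file end up looking like the following (the exact set depends on which shares are mounted):</li>
</ul>
<pre><code class="lang-plaintext">XDG_DESKTOP_DIR="/media/yt/Desktop"
XDG_DOCUMENTS_DIR="/media/yt/Documents"
XDG_DOWNLOAD_DIR="/media/yt/Downloads"
XDG_MUSIC_DIR="/media/yt/Music"
XDG_PICTURES_DIR="/media/yt/Pictures"
XDG_PUBLICSHARE_DIR="/media/yt/Public"
XDG_TEMPLATES_DIR="/media/yt/Templates"
XDG_VIDEOS_DIR="/media/yt/Videos"
</code></pre>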
<hr />
<h2 id="heading-installing-gnome-tweaks">Installing GNOME Tweaks.</h2>
<p>GNOME Tweaks is a utility for customising the GNOME desktop environment.</p>
<ul>
<li>From the terminal, I install GNOME Tweaks:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y gnome-tweaks
</code></pre>
<ul>
<li>I run GNOME Tweaks:</li>
</ul>
<pre><code class="lang-bash">gnome-tweaks
</code></pre>
<ul>
<li>I right-click the icon in the Dock and select “Pin to Dash” from the pop-up menu.</li>
</ul>
<hr />
<h2 id="heading-installing-the-package-managers">Installing the Package Managers.</h2>
<p>Package managers are used to distribute apps and utilities.</p>
<blockquote>
<p>NOTE: A developer might also package their app as an AppImage.</p>
</blockquote>
<h3 id="heading-installing-the-snap-package-manager">Installing the Snap Package Manager.</h3>
<ul>
<li>From the terminal, I use <code>APT</code> to <code>install</code> the Snap daemon:</li>
</ul>
<pre><code class="lang-plaintext">sudo apt install -y snapd
</code></pre>
<ul>
<li>I use <code>Snap</code> to <code>install</code> the <code>core</code> snap, the runtime that other snaps depend on:</li>
</ul>
<pre><code class="lang-plaintext">sudo snap install core
</code></pre>
<h3 id="heading-installing-the-flatpak-package-manager">Installing the Flatpak Package Manager.</h3>
<ul>
<li>From the terminal, I reload my shell configuration:</li>
</ul>
<pre><code class="lang-bash">. ~/.bashrc
</code></pre>
<ul>
<li>I use <code>APT</code> to <code>install</code> the <code>Flatpak</code> package manager:</li>
</ul>
<pre><code class="lang-plaintext">sudo apt install -y flatpak
</code></pre>
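<ul>
<li>Flatpak is far more useful with the Flathub repository configured, so I add the standard remote (the follow-up step from the Flatpak setup instructions):</li>
</ul>
<pre><code class="lang-bash"># Add the Flathub repository; --if-not-exists makes the command safe to re-run.
flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
</code></pre>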
<hr />
<h2 id="heading-installing-the-partition-managers">Installing the Partition Managers.</h2>
<p>Partition managers are system utilities for HDDs and SSDs.</p>
<h3 id="heading-installing-gnome-disks">Installing GNOME Disks.</h3>
<p>GNOME Disks is the default graphical partition management tool in GNOME-based desktop environments. The following describes how to install the GNOME Disks utility, if it is missing.</p>
<ul>
<li>From the terminal, I install the GNOME Disks utility (if required):</li>
</ul>
<pre><code class="lang-bash">sudo apt -y install gnome-disk-utility
</code></pre>
<h3 id="heading-installing-gparted">Installing GParted.</h3>
<p>GParted, or GNOME Partition Editor, is an alternative to GNOME Disks. It is a free, graphical, partition management tool. The following describes how to install the GParted utility.</p>
<ul>
<li>From the terminal, I install the GParted utility:</li>
</ul>
<pre><code class="lang-bash">sudo apt -y install gparted
</code></pre>
<p>GParted can also be <a target="_blank" href="https://gparted.org/liveusb.php">installed onto a USB thumb drive</a><strong><em>↗</em></strong>.</p>
<h3 id="heading-installing-exfatprogs">Installing exfatprogs.</h3>
<p>exfatprogs allows partition management tools, like GNOME Disks and GParted, to use the exFAT file system when formatting partitions.</p>
<blockquote>
<p>NOTE: exFAT is a proprietary file system from Microsoft; it was released in 2006 and is the successor to FAT32.</p>
</blockquote>
<ul>
<li>From the terminal, I install the exfatprogs library:</li>
</ul>
<pre><code class="lang-bash">sudo apt -y install exfatprogs
</code></pre>
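<ul>
<li>With exfatprogs installed, the same job can also be done from the terminal. A hedged example, where <code>/dev/sdX1</code> is a placeholder for the target partition (double-check the device name first, because formatting is destructive):</li>
</ul>
<pre><code class="lang-bash"># Format the target partition with the exFAT file system.
sudo mkfs.exfat /dev/sdX1
</code></pre>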
<hr />
<h2 id="heading-installing-unzip">Installing UNZIP.</h2>
<p>UNZIP is a utility that is used to unpack ZIP archives.</p>
<ul>
<li>From the terminal, I install UNZIP:</li>
</ul>
<pre><code class="lang-bash">sudo apt install unzip
</code></pre>
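<ul>
<li>A typical usage sketch, with <code>archive.zip</code> standing in for any downloaded package:</li>
</ul>
<pre><code class="lang-bash"># Extract archive.zip into a directory called extracted/.
unzip archive.zip -d extracted/
</code></pre>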
<hr />
<h2 id="heading-installing-curl">Installing Curl.</h2>
<p>curl is a command-line utility for transferring data to, or from, a remote server.</p>
<ul>
<li>From the terminal, I install Curl:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y apt-transport-https curl
</code></pre>
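<ul>
<li>A quick usage sketch, where the URL is a placeholder:</li>
</ul>
<pre><code class="lang-bash"># Download a remote file quietly, follow redirects, and keep its original name.
curl -fsSLO https://example.com/file.tar.gz
</code></pre>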
<hr />
<h2 id="heading-installing-wget">Installing Wget.</h2>
<p>Wget is a command-line utility for retrieving files using the HTTP, HTTPS, or FTP protocols.</p>
<ul>
<li>From the terminal, I install Wget:</li>
</ul>
<pre><code class="lang-bash">sudo apt install wget
</code></pre>
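<ul>
<li>A matching usage sketch (again, the URL is a placeholder):</li>
</ul>
<pre><code class="lang-bash"># Download a remote file into the current directory.
wget https://example.com/file.tar.gz
</code></pre>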
<hr />
<h2 id="heading-installing-uv">Installing uv.</h2>
<ul>
<li>I run the following command to install the Python package and project manager called uv:</li>
</ul>
<pre><code class="lang-bash">curl -LsSf https://astral.sh/uv/install.sh | sh
</code></pre>
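<ul>
<li>The installer places <code>uv</code> on my user path, so a new terminal (or a reloaded shell) should find it. A minimal check:</li>
</ul>
<pre><code class="lang-bash"># Confirm uv is on the PATH and report its version.
uv --version
</code></pre>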
<hr />
<h2 id="heading-installing-miniconda">Installing Miniconda.</h2>
<ul>
<li>Check out this post to see <a target="_blank" href="https://solodev.app/installing-miniconda">how to install Miniconda</a>.</li>
</ul>
<hr />
<h2 id="heading-installing-lxd-and-using-lxcs">Installing LXD and Using LXCs.</h2>
<ul>
<li>Check out these posts to see how <a target="_blank" href="https://solodev.app/installing-lxd-and-using-lxcs">to install LXD</a> and <a target="_blank" href="https://solodev.app/creating-a-local-linux-container">create an LXC</a>.</li>
</ul>
<hr />
<h2 id="heading-installing-nodejs">Installing NodeJS.</h2>
<p>NodeJS is a server-side runtime that uses the V8 JavaScript engine.</p>
<ul>
<li>I download and install nvm:</li>
</ul>
<pre><code class="lang-bash">curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash
</code></pre>
<ul>
<li>I restart the shell:</li>
</ul>
<pre><code class="lang-bash">\. <span class="hljs-string">"<span class="hljs-variable">$HOME</span>/.nvm/nvm.sh"</span>
</code></pre>
<ul>
<li>I download and install Node.js:</li>
</ul>
<pre><code class="lang-bash">nvm install 22
</code></pre>
<ul>
<li>I verify the Node.js version:</li>
</ul>
<pre><code class="lang-bash">node -v
</code></pre>
<ul>
<li>I verify npm version:</li>
</ul>
<pre><code class="lang-bash">npm -v
</code></pre>
<hr />
<h2 id="heading-installing-the-nodejs-package-managers">Installing the NodeJS Package Managers.</h2>
<p>Package managers are used to bundle code for distribution.</p>
<h3 id="heading-installing-npm">Installing NPM.</h3>
<ul>
<li>I install NPM:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y npm
</code></pre>
<ul>
<li>I verify the NPM installation:</li>
</ul>
<pre><code class="lang-bash">npm -v
</code></pre>
<h3 id="heading-installing-nvm">Installing NVM.</h3>
<ul>
<li>I download, and run, the NVM installation script:</li>
</ul>
<pre><code class="lang-bash">wget -q -O- https://raw.githubusercontent.com/nvm-sh/nvm/master/install.sh | bash
</code></pre>
<ul>
<li>I refresh my terminal:</li>
</ul>
<pre><code class="lang-bash">. ~/.bashrc
</code></pre>
<ul>
<li>I verify the NVM installation:</li>
</ul>
<pre><code class="lang-bash">nvm -v
</code></pre>
<h3 id="heading-installing-pnpm">Installing PNPM.</h3>
<ul>
<li>I install PNPM:</li>
</ul>
<pre><code class="lang-bash">sudo npm install -g pnpm
</code></pre>
<ul>
<li>I verify the PNPM installation:</li>
</ul>
<pre><code class="lang-bash">pnpm -v
</code></pre>
<h3 id="heading-installing-npx">Installing NPX.</h3>
<blockquote>
<p>NOTE: <a target="_blank" href="https://www.npmjs.com/package/npx">NPX</a> is now part of the NPM CLI.</p>
</blockquote>
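<ul>
<li>A quick sanity check that NPX is available:</li>
</ul>
<pre><code class="lang-bash"># NPX ships with NPM, so this prints a version rather than an error.
npx -v
</code></pre>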
<hr />
<h2 id="heading-installing-git">Installing Git.</h2>
<p>Git is a version control utility.</p>
<ul>
<li>From the terminal, I install Git:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y git-all
</code></pre>
<ul>
<li>I verify the installation:</li>
</ul>
<pre><code class="lang-bash">git -v
</code></pre>
<ul>
<li>I add my name:</li>
</ul>
<pre><code class="lang-bash">git config --global user.name <span class="hljs-string">"Brian King"</span>
</code></pre>
<ul>
<li>I add my email address:</li>
</ul>
<pre><code class="lang-bash">git config --global user.email <span class="hljs-string">"brian@digitalcore.co.nz"</span>
</code></pre>
<ul>
<li>I list the configuration settings:</li>
</ul>
<pre><code class="lang-bash">git config --list
</code></pre>
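<ul>
<li>While I am in the Git configuration, one optional setting I find useful (my own preference, not a required step) is naming the default branch:</li>
</ul>
<pre><code class="lang-bash"># Use "main" as the default branch for new repositories.
git config --global init.defaultBranch main
</code></pre>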
<hr />
<h2 id="heading-installing-the-github-utility">Installing the GitHub Utility.</h2>
<p>GitHub CLI is a tool that brings GitHub to the terminal. Developers can interact with GitHub repositories, pull requests, issues, and workflows directly from the command line.</p>
<h3 id="heading-installing-github-cli">Installing GitHub CLI.</h3>
<ul>
<li>From the terminal, I install GitHub CLI:</li>
</ul>
<pre><code class="lang-python">(type -p wget &gt;/dev/null || (sudo apt update &amp;&amp; sudo apt-get install wget -y)) \
    &amp;&amp; sudo mkdir -p -m <span class="hljs-number">755</span> /etc/apt/keyrings \
    &amp;&amp; out=$(mktemp) &amp;&amp; wget -nv -O$out https://cli.github.com/packages/githubcli-archive-keyring.gpg \
    &amp;&amp; cat $out | sudo tee /etc/apt/keyrings/githubcli-archive-keyring.gpg &gt; /dev/null \
    &amp;&amp; sudo chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg \
    &amp;&amp; echo <span class="hljs-string">"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main"</span> | sudo tee /etc/apt/sources.list.d/github-cli.list &gt; /dev/null \
    &amp;&amp; sudo apt update \
    &amp;&amp; sudo apt install gh -y
</code></pre>
<h3 id="heading-upgrading-github-cli">Upgrading GitHub CLI.</h3>
<ul>
<li>I upgrade GitHub CLI:</li>
</ul>
<pre><code class="lang-python">sudo apt update &amp;&amp; sudo apt install gh
</code></pre>
<h3 id="heading-authorising-github-cli">Authorising GitHub CLI.</h3>
<ul>
<li>I authorise GitHub CLI:</li>
</ul>
<pre><code class="lang-python">gh auth login
</code></pre>
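<ul>
<li>I confirm the login worked:</li>
</ul>
<pre><code class="lang-bash"># Show which GitHub account this machine is authenticated as.
gh auth status
</code></pre>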
<hr />
<h2 id="heading-installing-docker-ce">Installing Docker CE.</h2>
<p>Docker CE (Community Edition) is a container manager for app development and distribution.</p>
<blockquote>
<p><strong>Attribution:</strong></p>
<p><a target="_blank" href="https://linuxiac.com/how-to-install-docker-on-ubuntu-24-04-lts/">https://linuxiac.com/how-to-install-docker-on-ubuntu-24-04-lts/</a> <strong><em>↗.</em></strong></p>
</blockquote>
<ul>
<li>From the terminal, I add HTTPS and the Curl utility:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y apt-transport-https curl
</code></pre>
<ul>
<li>I import the Docker GPG repository key:</li>
</ul>
<pre><code class="lang-bash">curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
</code></pre>
<ul>
<li>I add the official Docker repository to my system:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [arch=<span class="hljs-subst">$(dpkg --print-architecture)</span> signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu <span class="hljs-subst">$(. /etc/os-release &amp;&amp; echo <span class="hljs-string">"<span class="hljs-variable">$VERSION_CODENAME</span>"</span>)</span> stable"</span> | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
</code></pre>
<ul>
<li>I refresh my local repo list and upgrade my system:</li>
</ul>
<pre><code class="lang-bash">sudo apt update &amp;&amp; sudo apt upgrade
</code></pre>
<ul>
<li>I install Docker:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
</code></pre>
<ul>
<li>I check to see if Docker is active:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl is-active docker
</code></pre>
<ul>
<li>I create the Docker group:</li>
</ul>
<pre><code class="lang-bash">sudo groupadd docker
</code></pre>
<ul>
<li>I add my account to the group:</li>
</ul>
<pre><code class="lang-bash">sudo usermod -aG docker <span class="hljs-variable">$USER</span>
</code></pre>
<ul>
<li>I test the installation:</li>
</ul>
<pre><code class="lang-bash">sudo docker run hello-world
</code></pre>
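<ul>
<li>Group membership only applies to new sessions, so <code>docker</code> still needs <code>sudo</code> until I log out and back in. As a sketch, <code>newgrp</code> lets me test the group change in the current terminal:</li>
</ul>
<pre><code class="lang-bash"># Start a subshell with the docker group applied...
newgrp docker

# ...then rerun the test without sudo.
docker run hello-world
</code></pre>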
<h3 id="heading-uninstalling-docker-ce">Uninstalling Docker CE.</h3>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><a target="_blank" href="https://learnubuntu.com/uninstall-docker/">https://learnubuntu.com/uninstall-docker/</a> <strong><em>↗.</em></strong></p>
</blockquote>
<ul>
<li>From the terminal, I stop the running Docker containers:</li>
</ul>
<pre><code class="lang-bash">docker stop $(docker ps -a -q)
</code></pre>
<ul>
<li>I remove the Docker containers:</li>
</ul>
<pre><code class="lang-bash">docker rm $(docker ps -a -q)
</code></pre>
<ul>
<li>I remove the Docker images:</li>
</ul>
<pre><code class="lang-bash">docker rmi $(docker images -a -q)
</code></pre>
<ul>
<li>I prune the custom Docker networks:</li>
</ul>
<pre><code class="lang-bash">docker network prune
</code></pre>
<ul>
<li>I prune the Docker containers, networks, images, cache and volumes:</li>
</ul>
<pre><code class="lang-bash">docker system prune -a
</code></pre>
<ul>
<li>I purge every Docker package:</li>
</ul>
<pre><code class="lang-bash">sudo apt purge docker-* containerd.io --auto-remove
</code></pre>
<ul>
<li>I remove the Docker files:</li>
</ul>
<pre><code class="lang-bash">sudo rm -rf /var/lib/docker
</code></pre>
<ul>
<li>I remove the Docker group:</li>
</ul>
<pre><code class="lang-bash">sudo groupdel docker
</code></pre>
<ul>
<li>I remove the Docker socket:</li>
</ul>
<pre><code class="lang-bash">sudo rm -rf /var/run/docker.sock
</code></pre>
<ul>
<li>I remove Docker Compose:</li>
</ul>
<pre><code class="lang-bash">sudo rm -rf /usr/<span class="hljs-built_in">local</span>/bin/docker-compose &amp;&amp; sudo rm -rf /etc/docker &amp;&amp; sudo rm -rf ~/.docker
</code></pre>
<hr />
<h2 id="heading-installing-docker-desktop">Installing Docker Desktop.</h2>
<p>Docker Desktop is a GUI for Docker.</p>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><a target="_blank" href="https://docs.docker.com/desktop/setup/install/linux/ubuntu/">https://docs.docker.com/desktop/setup/install/linux/ubuntu/</a> <strong><em>↗,</em></strong></p>
<p><a target="_blank" href="https://dev.to/chandrashekhar/docker-desktop-is-not-working-on-ubuntu-2404-lts--2kpa">https://dev.to/chandrashekhar/docker-desktop-is-not-working-on-ubuntu-2404-lts--2kpa</a> <strong><em>↗.</em></strong></p>
</blockquote>
<ul>
<li>I download the latest version of Docker Desktop:</li>
</ul>
<p><a target="_blank" href="https://docs.docker.com/desktop/release-notes/">https://docs.docker.com/desktop/release-notes/</a></p>
<ul>
<li>From the terminal, I change to the <code>Downloads</code> directory:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> ~/Downloads
</code></pre>
<ul>
<li>I install the DEB package:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y ./docker-desktop*.deb
</code></pre>
<blockquote>
<p>NOTE: Ignore the error message after installation.</p>
</blockquote>
<ul>
<li>I fix the permissions issue:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">'kernel.apparmor_restrict_unprivileged_userns = 0'</span> | sudo tee /etc/sysctl.d/20-apparmor-donotrestrict.conf
</code></pre>
<ul>
<li>I reboot my system:</li>
</ul>
<pre><code class="lang-bash">reboot
</code></pre>
<ul>
<li>After the reboot, I return to a terminal and launch Docker Desktop:</li>
</ul>
<pre><code class="lang-bash">systemctl --user start docker-desktop
</code></pre>
<ul>
<li>From the Apps Drawer, I pin the Docker Desktop icon to the Dock.</li>
</ul>
<h3 id="heading-uninstall-docker-desktop">Uninstall Docker Desktop.</h3>
<ul>
<li>From the terminal, I remove Docker Desktop from my system:</li>
</ul>
<pre><code class="lang-bash">sudo apt remove docker-desktop
</code></pre>
<ul>
<li>I remove the configuration and data files:</li>
</ul>
<pre><code class="lang-bash">sudo apt remove docker-desktop &amp;&amp; sudo rm /usr/<span class="hljs-built_in">local</span>/bin/com.docker.cli &amp;&amp; sudo apt purge docker-desktop
</code></pre>
<hr />
<h2 id="heading-updating-blender">Updating Blender.</h2>
<p>Blender is a 3D modelling, rendering, animation, and simulation app.</p>
<ul>
<li>From the terminal, I update Blender:</li>
</ul>
<pre><code class="lang-bash">sudo apt purge -y --auto-remove blender &amp;&amp; sudo snap install blender --classic
</code></pre>
<h3 id="heading-removing-blender">Removing Blender.</h3>
<ul>
<li>I use the following command to remove Blender:</li>
</ul>
<pre><code class="lang-bash">sudo snap remove blender
</code></pre>
<hr />
<h2 id="heading-installing-vs-code">Installing VS Code.</h2>
<p>VS Code (Visual Studio Code) is a free, versatile code editor.</p>
<ul>
<li>From the terminal, I install VS Code:</li>
</ul>
<pre><code class="lang-bash">sudo snap install code --classic
</code></pre>
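<ul>
<li>I verify the installation:</li>
</ul>
<pre><code class="lang-bash"># Print the VS Code version, commit hash, and architecture.
code --version
</code></pre>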
<h3 id="heading-updating-the-snap-installed-vs-code-editor">Updating the Snap-Installed VS Code Editor.</h3>
<ul>
<li>I use the following command to update the Snap-installed VS Code editor:</li>
</ul>
<pre><code class="lang-bash">sudo snap refresh code --classic
</code></pre>
<h3 id="heading-removing-vs-code">Removing VS Code.</h3>
<ul>
<li>I use the following command to remove VS Code:</li>
</ul>
<pre><code class="lang-bash">sudo snap remove code
</code></pre>
<ul>
<li>This command removes the configurations for VS Code:</li>
</ul>
<pre><code class="lang-bash">sudo rm -R ~/.config/Code
</code></pre>
<ul>
<li>This command removes the installation directory:</li>
</ul>
<pre><code class="lang-bash">sudo rm -R ~/.vscode
</code></pre>
<h3 id="heading-printing-an-asciidoc-file-as-a-pdf">Printing an AsciiDoc File as a PDF.</h3>
<ul>
<li><p>I install the AsciiDoc extension from Asciidoctor for VS Code.</p>
</li>
<li><p>From the terminal, I install Ruby:</p>
</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y ruby-full
</code></pre>
<ul>
<li>I install the AsciiDoctor PDF Gem:</li>
</ul>
<pre><code class="lang-bash">sudo gem install asciidoctor-pdf
</code></pre>
<ul>
<li>From the terminal, I look for the path to the AsciiDoctor PDF Gem:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">which</span> asciidoctor-pdf
</code></pre>
<ul>
<li><p>Within VS Code, I open the <code>Settings</code> tab and search for <code>asciidoc-pdf</code>.</p>
</li>
<li><p>I set the <code>Asciidoc &gt; Pdf: Asciidoctor Pdf Command Path</code>:</p>
</li>
</ul>
<pre><code class="lang-bash">/usr/<span class="hljs-built_in">local</span>/bin/asciidoctor-pdf
</code></pre>
<blockquote>
<p>NOTE: The path might also be <code>/usr/bin/asciidoctor-pdf</code>.</p>
</blockquote>
<ul>
<li><p>I set the <code>Asciidoc &gt; Pdf: Engine</code> to <code>asciidoctor-pdf</code>.</p>
</li>
<li><p>I create an example AsciiDoc file called <code>example.ad</code>:</p>
</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-comment"># Test File</span>

This is a <span class="hljs-built_in">test</span> file.
</code></pre>
<ul>
<li><p>I open the Command Palette (CTRL + SHIFT + P) and search for <code>AsciiDoc: Export Document as PDF</code>.</p>
</li>
<li><p>I use this feature to generate a PDF file based on my <code>example.ad</code> file.</p>
</li>
</ul>
<h3 id="heading-removing-ruby">Removing Ruby.</h3>
<ul>
<li>I use the following command to remove Ruby:</li>
</ul>
<pre><code class="lang-bash">sudo apt remove ruby -y
</code></pre>
<h3 id="heading-useful-vs-code-extensions-when-using-asciidoc">Useful VS Code Extensions when using AsciiDoc.</h3>
<p>Here is a list of other VS Code extensions I find useful when creating an <code>example.ad</code> file or generating a PDF file from the <code>example.ad</code> file:</p>
<ul>
<li><p>Code Spell Checker from Street Side Software lets me check for spelling mistakes in my <code>example.ad</code> file, and</p>
</li>
<li><p>vscode-pdf from tomoki1207 lets me view PDF files within VS Code. This is useful when I use AsciiDoc to generate PDF files because the view automatically updates when I export an update to the <code>example.ad</code> content.</p>
</li>
</ul>
<hr />
<h2 id="heading-installing-spotify">Installing Spotify.</h2>
<p>Spotify is a music streaming app and service.</p>
<ul>
<li>From the terminal, I install Spotify:</li>
</ul>
<pre><code class="lang-bash">sudo snap install spotify
</code></pre>
<h3 id="heading-removing-spotify">Removing Spotify.</h3>
<ul>
<li>I use the following command to remove Spotify:</li>
</ul>
<pre><code class="lang-bash">sudo snap remove spotify
</code></pre>
<hr />
<h2 id="heading-installing-screenkey">Installing Screenkey.</h2>
<p>Screenkey displays keystrokes on a monitor.</p>
<ul>
<li>From the terminal, I install Screenkey:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y screenkey
</code></pre>
<hr />
<h2 id="heading-installing-inkscape">Installing Inkscape.</h2>
<p>Inkscape is a vector-based image editor.</p>
<ul>
<li>From the terminal, I add the repo:</li>
</ul>
<pre><code class="lang-bash">sudo add-apt-repository ppa:inkscape.dev/stable
</code></pre>
<ul>
<li>I update and upgrade my system:</li>
</ul>
<pre><code class="lang-bash">sudo apt update &amp;&amp; sudo apt -y dist-upgrade
</code></pre>
<ul>
<li>I install Inkscape:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y inkscape
</code></pre>
<hr />
<h2 id="heading-installing-krita">Installing Krita.</h2>
<p>Krita is a pixel-based image editor.</p>
<ul>
<li>From the terminal, I install Krita:</li>
</ul>
<pre><code class="lang-bash">sudo snap install krita
</code></pre>
<h2 id="heading-installing-krita-manually">Installing Krita Manually.</h2>
<ul>
<li>From the terminal, I install the libfuse2 library:</li>
</ul>
<pre><code class="lang-bash">sudo apt install libfuse2
</code></pre>
<blockquote>
<p>NOTE: AppImages rely on FUSE (Filesystem in Userspace) to function properly.</p>
</blockquote>
<ul>
<li><p>From a browser, I download the AppImage file from the <a target="_blank" href="https://krita.org/en/download/">Krita.org</a> website.</p>
</li>
<li><p>From the file manager, I move the Krita app to its own directory.</p>
</li>
<li><p>I copy the Krita logo to the Krita directory:</p>
</li>
</ul>
<blockquote>
<p>NOTE: I downloaded the Krita PNG logo from the Internet.</p>
</blockquote>
<ul>
<li>From the Krita directory, I make the AppImage executable, for example:</li>
</ul>
<pre><code class="lang-bash">chmod +x krita-5.2.13-x86_64.AppImage
</code></pre>
<ul>
<li>I use the Nano text editor to create a desktop entry:</li>
</ul>
<pre><code class="lang-bash">nano ~/.<span class="hljs-built_in">local</span>/share/applications/krita.desktop
</code></pre>
<ul>
<li>I paste (CTRL + SHIFT + V) the following into the desktop entry, e.g.:</li>
</ul>
<pre><code class="lang-bash">[Desktop Entry]
Name=Krita
Exec=/media/brian/Downloads/Ubuntu/Krita/krita-5.2.13-x86_64.AppImage
Icon=/media/brian/Downloads/Ubuntu/Krita/krita-logo.png
Type=Application
Categories=Graphics;Images
</code></pre>
<ul>
<li><p>I save (CTRL + S) the changes, and exit (CTRL + X) the Nano text editor.</p>
</li>
<li><p>I create a symlink to start Krita from the terminal:</p>
</li>
</ul>
<pre><code class="lang-bash">sudo ln -s /media/brian/Downloads/Ubuntu/Krita/krita-5.2.13-x86_64.AppImage /usr/<span class="hljs-built_in">local</span>/bin/krita
</code></pre>
<ul>
<li><p>I open a new terminal.</p>
</li>
<li><p>From the new terminal, I run the Krita image editor:</p>
</li>
</ul>
<pre><code class="lang-bash">krita
</code></pre>
<ul>
<li>From the apps menu, I pin the Krita app to the Dash.</li>
</ul>
<hr />
<h2 id="heading-updating-the-firefox-browser">Updating the Firefox Browser.</h2>
<blockquote>
<p>NOTE: I can determine the type of Firefox installation (APT or Snap) by reading the <code>Help &gt; About Firefox</code> banner.</p>
</blockquote>
<ul>
<li>I use the following command to update the Snap-installed Firefox browser:</li>
</ul>
<pre><code class="lang-bash">sudo snap refresh firefox
</code></pre>
<ul>
<li>I use the following command to update the APT-installed Firefox browser:</li>
</ul>
<pre><code class="lang-bash">sudo apt update &amp;&amp; sudo apt upgrade firefox
</code></pre>
<hr />
<h2 id="heading-installing-the-brave-browser">Installing the Brave Browser.</h2>
<p>Brave is a Chromium-based web browser.</p>
<ul>
<li>From the terminal, I install the Brave browser:</li>
</ul>
<pre><code class="lang-bash">sudo curl -fsSLo /usr/share/keyrings/brave-browser-archive-keyring.gpg https://brave-browser-apt-release.s3.brave.com/brave-browser-archive-keyring.gpg &amp;&amp; <span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [signed-by=/usr/share/keyrings/brave-browser-archive-keyring.gpg] https://brave-browser-apt-release.s3.brave.com/ stable main"</span>|sudo tee /etc/apt/sources.list.d/brave-browser-release.list &amp;&amp; sudo apt update &amp;&amp; sudo apt install -y brave-browser
</code></pre>
<hr />
<h2 id="heading-installing-rust">Installing Rust.</h2>
<p>Rust is a general-purpose, memory-safe, programming language.</p>
<ul>
<li>From the terminal, I install Rust:</li>
</ul>
<pre><code class="lang-bash">sudo apt install build-essential &amp;&amp; curl --proto <span class="hljs-string">'=https'</span> --tlsv1.3 https://sh.rustup.rs -sSf | sh &amp;&amp; <span class="hljs-built_in">source</span> <span class="hljs-string">"<span class="hljs-variable">$HOME</span>/.cargo/env"</span>
</code></pre>
<ul>
<li>I check the version:</li>
</ul>
<pre><code class="lang-bash">rustup --version
</code></pre>
<ul>
<li>I open the docs:</li>
</ul>
<pre><code class="lang-bash">rustup doc
</code></pre>
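<ul>
<li>To confirm the toolchain compiles code end-to-end, a minimal hello-world sketch using Cargo (the project name <code>hello</code> is arbitrary):</li>
</ul>
<pre><code class="lang-bash"># Create a new binary crate, then build and run it.
cargo new hello
cd hello
cargo run    # prints "Hello, world!"
</code></pre>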
<h3 id="heading-uninstalling-rust">Uninstalling Rust.</h3>
<ul>
<li>I use the following command to uninstall Rust:</li>
</ul>
<pre><code class="lang-bash">rustup self uninstall
</code></pre>
<hr />
<h2 id="heading-installing-obs-studio">Installing OBS Studio.</h2>
<p>OBS Studio is a screen casting and streaming app.</p>
<ul>
<li>From the terminal, I install the repository:</li>
</ul>
<pre><code class="lang-bash">sudo add-apt-repository ppa:obsproject/obs-studio
</code></pre>
<ul>
<li>I update, and upgrade, my system:</li>
</ul>
<pre><code class="lang-bash">sudo apt update &amp;&amp; sudo apt -y upgrade
</code></pre>
<ul>
<li>I install OBS Studio:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y obs-studio
</code></pre>
<hr />
<h2 id="heading-installing-davinci-resolve-studio">Installing DaVinci Resolve Studio.</h2>
<p>DaVinci Resolve is a video editing, colour grading, and sound mixing app.</p>
<blockquote>
<p>NOTE: DaVinci Resolve Studio requires a user license.</p>
</blockquote>
<ul>
<li>I download the latest Linux copy of DaVinci Resolve Studio:</li>
</ul>
<p><a target="_blank" href="https://www.blackmagicdesign.com/au/support/family/davinci-resolve-and-fusion">https://www.blackmagicdesign.com/au/support/family/davinci-resolve-and-fusion</a></p>
<ul>
<li>From the terminal, I install the following packages:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y libqt5x11extras5 libfuse2
</code></pre>
<ul>
<li>I go to the directory with the latest copy of DaVinci Resolve Studio:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> /media/brian/Downloads/Ubuntu/Blackmagic_Design/DaVinci_Resolve/20.2.3_Studio
</code></pre>
<ul>
<li>I extract the contents of the downloaded ZIP file:</li>
</ul>
<pre><code class="lang-bash">sudo unzip ./DaVinci_Resolve_*_Linux.zip
</code></pre>
<ul>
<li>I change the mode of the extracted RUN file to an executable:</li>
</ul>
<pre><code class="lang-bash">sudo chmod +x ./DaVinci_Resolve_*_Linux.run
</code></pre>
<ul>
<li>I install the unzipped <code>run</code> file:</li>
</ul>
<pre><code class="lang-bash">sudo SKIP_PACKAGE_CHECK=1 ./DaVinci_Resolve_*_Linux.run -i
</code></pre>
<blockquote>
<p>NOTE: The <code>SKIP_PACKAGE_CHECK=1</code> environment variable bypasses the check that looks for missing libraries during the installation.</p>
</blockquote>
<ul>
<li>I make a new directory called <code>disabled_libs</code>:</li>
</ul>
<pre><code class="lang-bash">sudo mkdir /opt/resolve/libs/disabled_libs
</code></pre>
<ul>
<li>I move the <code>libglib</code>, <code>libgio</code>, and <code>libgmodule</code> into the <code>disabled_libs</code> directory:</li>
</ul>
<pre><code class="lang-bash">sudo mv /opt/resolve/libs/libglib-2.0.so* /opt/resolve/libs/libgio-2.0.so* /opt/resolve/libs/libgmodule-2.0.so* /opt/resolve/libs/disabled_libs/
</code></pre>
<blockquote>
<p>NOTE: Moving these libraries forces Resolve to use the Ubuntu libraries.</p>
</blockquote>
<ul>
<li>I update my NVIDIA drivers, if required:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y nvidia-driver-580
</code></pre>
<blockquote>
<p>NOTE: Resolve requires the 550 drivers or later.</p>
</blockquote>
<ul>
<li>I run the application using the Desktop icon (in the Apps Drawer), or as a terminal session:</li>
</ul>
<pre><code class="lang-bash">/opt/resolve/bin/resolve
</code></pre>
<h3 id="heading-fixing-the-resolve-scaling-issue">Fixing the Resolve Scaling Issue.</h3>
<ul>
<li><p>Open a project.</p>
</li>
<li><p>Go to <code>DaVinci Resolve</code> (top left) &gt; <code>Preferences</code> &gt; <code>User</code>.</p>
</li>
<li><p>Set the <code>UI Display Scale</code> to <code>200%</code>.</p>
</li>
</ul>
<hr />
<h2 id="heading-installing-fusion-studio">Installing Fusion Studio.</h2>
<p>Fusion Studio is a visual effects, 3D animation, and motion graphics app.</p>
<blockquote>
<p>NOTE: Fusion Studio uses the same license that activates DaVinci Resolve Studio.</p>
</blockquote>
<ul>
<li>I download the latest Linux copy of Fusion Studio:</li>
</ul>
<p><a target="_blank" href="https://www.blackmagicdesign.com/au/support/family/davinci-resolve-and-fusion">https://www.blackmagicdesign.com/au/support/family/davinci-resolve-and-fusion</a></p>
<ul>
<li>I go to the directory with the latest copy of Fusion Studio:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> /media/brian/Downloads/Ubuntu/Blackmagic_Design/Fusion/20.2.3_Studio
</code></pre>
<ul>
<li>I extract the contents of the downloaded TAR file:</li>
</ul>
<pre><code class="lang-bash">sudo unzip ./Blackmagic_Fusion_Studio_*.tar
</code></pre>
<ul>
<li>I change to the new sub-directory:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> ./Blackmagic_Fusion_Studio_20.2.3_Linux
</code></pre>
<ul>
<li>I change the mode of the extracted RUN file to an executable:</li>
</ul>
<pre><code class="lang-bash">sudo chmod +x ./Blackmagic_Fusion_Studio_*.run
</code></pre>
<ul>
<li>I install the unzipped <code>run</code> file:</li>
</ul>
<pre><code class="lang-bash">sudo SKIP_PACKAGE_CHECK=1 ./Blackmagic_Fusion_Studio_*_installer.run -i
</code></pre>
<ul>
<li>I update my NVIDIA drivers, if required:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y nvidia-driver-580
</code></pre>
<blockquote>
<p>NOTE: Fusion requires the 550 drivers or later.</p>
</blockquote>
<ul>
<li>I run the application using the Desktop icon (in the Apps Drawer), or as a terminal session:</li>
</ul>
<pre><code class="lang-bash">/opt/BlackmagicDesign/Fusion19/Fusion
</code></pre>
<hr />
<h2 id="heading-one-final-update">One Final Update.</h2>
<ul>
<li>From the terminal, I update my system one last time:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<hr />
<h2 id="heading-the-results">The Results.</h2>
<p>Setting up a personalised Ubuntu 24.04 LTS system involves selecting and installing a variety of applications and utilities that cater to my specific needs. From essential software like package managers and partition tools to specialised applications for development, media, and productivity, each component plays a crucial role in enhancing my computing experience. By carefully choosing and configuring these tools, I can create a versatile and efficient environment that supports my daily tasks and long-term goals. I keep my system updated while exploring new tools that can further optimise my workflow.</p>
<hr />
<h2 id="heading-in-conclusion">In Conclusion.</h2>
<p>I’ve improved my Ubuntu experience with these essential apps &amp; utilities.</p>
<p>Over the past 10 years, I’ve discovered these must-have applications and utilities that let me tailor my system to my requirements. As a developer, a creative, and someone who loves a well-ordered digital workspace, this guide includes a minimum set of tools that I find essential.</p>
<p>From package managers like APT, Snap, and Flatpak, to partition managers like GNOME Disks and GParted, and the media utilities that ship in Ubuntu Studio, I have a standard set of tools that meets most of my daily needs.</p>
<p>With tools like Blender for 3D modelling, DaVinci Resolve Studio for video editing, and VS Code for app development, I can easily achieve any outcome that comes to mind.</p>
<p>I can stream my favourite tunes with Spotify while compiling new software using the Rust compiler. I can even enhance my app development with tools like Docker Desktop and Miniconda.</p>
<p>Keeping my system updated and exploring new tools will significantly optimise my workflow and support my daily tasks as well as my long-term goals.</p>
<p>What are your go-to applications? Share your favourites in the comments below!</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<h2 id="heading-hash-tags">Hash Tags.</h2>
<p>#Ubuntu #Ubuntu-Studio #Linux #Open-Source #Productivity #Software-Development #Blender #DaVinci-Resolve-Studio #VS-Code #Docker #Spotify #Innovation #Tech-Community #Tech-Tools</p>
]]></content:encoded></item><item><title><![CDATA[DSPy: Install, Setup, and Test.]]></title><description><![CDATA[TL;DR.
Update: Saturday 18th February 2025.
This post is a guide to installing, setting up, and testing DSPy (Declarative Self-improving Python) and will cover the following topics:

The prerequisites for assembling a DSPy development environment,

T...]]></description><link>https://solodev.app/dspy-install-setup-and-test</link><guid isPermaLink="true">https://solodev.app/dspy-install-setup-and-test</guid><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Fri, 10 Jan 2025 09:00:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1739830337181/5d8a5118-4c84-497b-9815-57cf08698c33.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-tldr">TL;DR.</h2>
<p><em>Update: Saturday 18th February 2025.</em></p>
<p>This post is a guide to installing, setting up, and testing DSPy (<strong><em>Declarative Self-improving Python</em></strong>) and will cover the following topics:</p>
<ul>
<li><p>The prerequisites for assembling a DSPy development environment,</p>
</li>
<li><p>The installation of Ollama and Miniconda,</p>
</li>
<li><p>The creation of a Conda environment for DSPy,</p>
</li>
<li><p>The installation of the DSPy framework,</p>
</li>
<li><p>The installation and setup of Jupyter Notebook,</p>
</li>
<li><p>Running the test code in the newly assembled DSPy development environment, and</p>
</li>
<li><p>Running the test code using Python.</p>
</li>
</ul>
<p>The aim of this guide is to assemble, and test, a development environment for DSPy programming.</p>
<blockquote>
<p><strong>Attributions <em>↗</em>:</strong></p>
<p><a target="_blank" href="https://dspy.ai/tutorials/rag/">https://dspy.ai/tutorials/rag/</a> <strong><em>↗,</em></strong></p>
<p><a target="_blank" href="https://github.com/stanfordnlp/dspy">https://github.com/stanfordnlp/dspy</a> <strong><em>↗, and</em></strong></p>
<p><a target="_blank" href="https://dspy.ai/">https://dspy.ai/</a> <strong><em>↗.</em></strong></p>
</blockquote>
<h2 id="heading-an-introduction">An Introduction.</h2>
<p>DSPy is used to programmatically provide "a more systematic approach to solving hard tasks with advanced LMs." It can build simple classifiers, sophisticated RAG pipelines, and agentic processes.</p>
<blockquote>
<p>The purpose of this post is to provide a procedure for creating a DSPy programming environment.</p>
</blockquote>
<h2 id="heading-the-big-picture">The Big Picture.</h2>
<p>For classifiers, RAG (Retrieval-Augmented Generation), and agents, the main idea is to take advantage of advanced LM workflows, especially those systems that can retrieve relevant information while generating contextually accurate responses. By integrating retrieval mechanisms with generative models, these workflows aim to create intelligent systems that can autonomously navigate complex tasks, adapt to dynamic environments, and provide insightful solutions. This approach not only improves the accuracy and relevance of LM outputs but also empowers users to leverage AI for complex, problem-solving operations.</p>
<p>DSPy is a programmatic tool for creating advanced AI solutions that was originally designed to replace the clunky, and fragile, prompt engineering paradigm.</p>
<h2 id="heading-prerequisites">Prerequisites.</h2>
<ul>
<li>A Debian-based Linux distro (I use Ubuntu).</li>
</ul>
<h2 id="heading-updating-my-base-system">Updating my Base System.</h2>
<ul>
<li>From the (base) terminal, I update my (base) system:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<blockquote>
<p>NOTE: The Ollama LLM manager is already installed on my (base) system.</p>
</blockquote>
<hr />
<h2 id="heading-what-is-ollama">What is Ollama?</h2>
<p>Ollama is a tool that is used to download, set up, and run large language models on a local PC. It lets me use powerful models like Llama 2 and Mistral on my personal computer. Ollama natively runs on Linux, macOS, and Windows (in preview).</p>
<h3 id="heading-installing-ollama">Installing Ollama.</h3>
<ul>
<li>From the terminal, I install Ollama:</li>
</ul>
<pre><code class="lang-bash">curl https://ollama.ai/install.sh | sh
</code></pre>
<ul>
<li>I list the LMs downloaded by Ollama:</li>
</ul>
<pre><code class="lang-bash">ollama list
</code></pre>
<ul>
<li>If the above command fails, I run Ollama as a background service:</li>
</ul>
<pre><code class="lang-bash">ollama serve &amp;
</code></pre>
<ul>
<li>If the following error shows when running the previous command, that means Ollama is <em>already</em> running as a background service:</li>
</ul>
<pre><code class="lang-bash">Error: listen tcp 127.0.0.1:11434: <span class="hljs-built_in">bind</span>: address already <span class="hljs-keyword">in</span> use
</code></pre>
<h3 id="heading-pulling-an-advanced-lm">Pulling an Advanced LM.</h3>
<ul>
<li>I pull an advanced LM:</li>
</ul>
<pre><code class="lang-bash">ollama pull deepseek-r1:14b
</code></pre>
<blockquote>
<p>NOTE: This model has 14B parameters, requires 9GB of VRAM, and runs on my RTX 3060 12GB GPU.</p>
</blockquote>
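<ul>
<li>A one-off prompt is an easy way to confirm the model loads before wiring it into DSPy (the prompt text is arbitrary):</li>
</ul>
<pre><code class="lang-bash"># Load the model, answer a single prompt, then return to the shell.
ollama run deepseek-r1:14b "Say hello in one sentence."
</code></pre>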
<hr />
<h2 id="heading-what-is-miniconda">What is Miniconda?</h2>
<p>Miniconda, a bootstrap version of Anaconda, is a virtual environment manager that is small, FREE, and also includes the conda package manager, Python, and other packages that are required or useful to a developer, like pip and zlib.</p>
<h3 id="heading-installing-miniconda">Installing Miniconda.</h3>
<ul>
<li>I make the Miniconda directory:</li>
</ul>
<pre><code class="lang-bash">mkdir -p ~/miniconda3
</code></pre>
<ul>
<li>I download the installation payload:</li>
</ul>
<pre><code class="lang-bash">wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
</code></pre>
<ul>
<li>I run the installation script:</li>
</ul>
<pre><code class="lang-bash">bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
</code></pre>
<ul>
<li>I remove the installation script:</li>
</ul>
<pre><code class="lang-bash">rm -rf ~/miniconda3/miniconda.sh
</code></pre>
<h3 id="heading-initialising-miniconda">Initialising Miniconda.</h3>
<ul>
<li>I initialise <code>Miniconda</code>:</li>
</ul>
<pre><code class="lang-bash">~/miniconda3/bin/conda init bash
</code></pre>
<h3 id="heading-updating-miniconda">Updating Miniconda.</h3>
<ul>
<li>I change the owner of the <code>Miniconda</code> directory to the current, logged-in account:</li>
</ul>
<pre><code class="lang-bash">sudo chown -R <span class="hljs-variable">$USER</span>:<span class="hljs-variable">$USER</span> <span class="hljs-variable">$HOME</span>/miniconda3
</code></pre>
<ul>
<li>I update <code>Miniconda</code>:</li>
</ul>
<pre><code class="lang-bash">conda update -n base -c defaults conda
</code></pre>
<h3 id="heading-using-conda-to-create-the-dspy-environment">Using Conda to Create the DSPy Environment.</h3>
<ul>
<li>I use <code>conda</code> to display a <code>list</code> of Miniconda <code>env</code>ironments:</li>
</ul>
<pre><code class="lang-bash">conda env list
</code></pre>
<ul>
<li>I use <code>conda</code> to <code>create</code>, and <code>activate</code>, a new environment named (-n) (DSPy):</li>
</ul>
<pre><code class="lang-bash">conda create -n DSPy python=3.11 -y &amp;&amp; conda activate DSPy
</code></pre>
<blockquote>
<p>NOTE: This command creates the (DSPy) environment, then activates the (DSPy) environment.</p>
</blockquote>
<h3 id="heading-creating-the-dspy-home-directory">Creating the <code>DSPy</code> Home Directory.</h3>
<blockquote>
<p>NOTE: I will define the home directory with an activation script in the environment directory.</p>
</blockquote>
<ul>
<li>I create the <code>DSPy</code> home directory:</li>
</ul>
<pre><code class="lang-bash">mkdir ~/DSPy
</code></pre>
<ul>
<li>I make new directories within the (DSPy) environment:</li>
</ul>
<pre><code class="lang-bash">mkdir -p ~/anaconda3/envs/DSPy/etc/conda/activate.d
</code></pre>
<ul>
<li>I use the Nano text editor to create the <code>set_working_directory.sh</code> shell script:</li>
</ul>
<pre><code class="lang-bash">sudo nano ~/anaconda3/envs/DSPy/etc/conda/activate.d/set_working_directory.sh
</code></pre>
<ul>
<li>I add the following, save (CTRL + S) the changes, and exit (CTRL + X) the Nano text editor:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> ~/DSPy
</code></pre>
<ul>
<li>I activate the (base) environment:</li>
</ul>
<pre><code class="lang-bash">conda activate
</code></pre>
<ul>
<li>I activate the (DSPy) environment:</li>
</ul>
<pre><code class="lang-bash">conda activate DSPy
</code></pre>
<blockquote>
<p>NOTE: I should now, by default, be in the <code>~/DSPy</code> home directory.</p>
</blockquote>
<hr />
<h2 id="heading-installing-dspy">Installing DSPy.</h2>
<ul>
<li>From the (DSPy) terminal, I install the latest version of DSPy:</li>
</ul>
<pre><code class="lang-bash">pip install git+https://github.com/stanfordnlp/dspy.git
</code></pre>
<ul>
<li>I can also use pip to install DSPy directly from PyPI:</li>
</ul>
<pre><code class="lang-bash">pip install -U dspy
</code></pre>
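<ul>
<li>A minimal check that the package installed cleanly:</li>
</ul>
<pre><code class="lang-bash"># Confirm the package is present and show its version.
pip show dspy
</code></pre>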
<hr />
<h2 id="heading-installing-jupyter-notebook">Installing Jupyter Notebook</h2>
<ul>
<li>From the DSPy terminal, I install Jupyter Notebook:</li>
</ul>
<pre><code class="lang-bash">pip install notebook
</code></pre>
<ul>
<li>I generate a Jupyter Notebook configuration file:</li>
</ul>
<pre><code class="lang-bash">jupyter notebook --generate-config
</code></pre>
<ul>
<li>I open the configuration file with the Nano text editor:</li>
</ul>
<pre><code class="lang-bash">sudo nano ~/.jupyter/jupyter_notebook_config.py
</code></pre>
<ul>
<li>I set a default browser:</li>
</ul>
<pre><code class="lang-bash">c.ServerApp.browser = <span class="hljs-string">'/bin/brave-browser %s'</span>
</code></pre>
<ul>
<li><p>I save (CTRL + S) the changes and exit (CTRL + X) the Nano text editor,</p>
</li>
<li><p>I upgrade Jupyter Notebook:</p>
</li>
</ul>
<pre><code class="lang-bash">pip install --upgrade jupyter
</code></pre>
<ul>
<li>I upgrade a Jupyter Notebook dependency:</li>
</ul>
<pre><code class="lang-bash">pip install --upgrade ipywidgets
</code></pre>
<ul>
<li>I run the Jupyter Notebook on port 8091:</li>
</ul>
<pre><code class="lang-bash">jupyter notebook --port 8091
</code></pre>
<ul>
<li><p>In the file menu, I select "File &gt; New &gt; Notebook",</p>
</li>
<li><p>I choose the default "Python 3 (ipykernel)" kernel, select the "Always start the preferred kernel" tick box, and click the blue "Select" button,</p>
</li>
<li><p>In the file menu of the Notebook, I select "File &gt; Rename...",</p>
</li>
<li><p>I rename the Notebook as "hello-world.ipynb" and click the blue "Rename" button.</p>
</li>
</ul>
<hr />
<h2 id="heading-testing-the-dspy-environment">Testing the DSPy Environment.</h2>
<ul>
<li>In the “hello-world“ Notebook, I define the local model that is used by DSPy:</li>
</ul>
<pre><code class="lang-bash">import dspy

lm = dspy.LM(model=<span class="hljs-string">'ollama/deepseek-r1:14b'</span>)
dspy.configure(lm=lm)
</code></pre>
<blockquote>
<p>NOTE: Deepseek R1 uses chain-of-thought reasoning by default and, as a result, may take longer to compose a result. However, using this model results in a higher chance of generating a correct answer.</p>
</blockquote>
<ul>
<li>In a new cell, I declare a module that takes a <code>question</code> (of type <code>str</code>) as input and produces a <code>response</code> as an output:</li>
</ul>
<pre><code class="lang-bash">qa = dspy.Predict(<span class="hljs-string">'question: str -&gt; response: str'</span>)
response = qa(question=<span class="hljs-string">"What is Hello World?"</span>)

<span class="hljs-built_in">print</span>(response.response)
</code></pre>
<hr />
<h2 id="heading-running-the-code-with-python">Running the Code with Python.</h2>
<ul>
<li>From the (DSPy) terminal, I change to the DSPy directory:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> ~/DSPy
</code></pre>
<ul>
<li>I use the Nano text editor to create the <code>hello_world.py</code> file:</li>
</ul>
<pre><code class="lang-bash">sudo nano hello-world.py
</code></pre>
<ul>
<li>I add (CTRL + SHIFT + V) the following to the <code>hello_world.py</code> file:</li>
</ul>
<pre><code class="lang-bash">import dspy

lm = dspy.LM(model=<span class="hljs-string">'ollama/deepseek-r1:14b'</span>)
dspy.configure(lm=lm)

qa = dspy.Predict(<span class="hljs-string">'question: str -&gt; response: str'</span>)
response = qa(question=<span class="hljs-string">"What is Hello World?"</span>)

<span class="hljs-built_in">print</span>(response.response)
</code></pre>
<blockquote>
<p>NOTE: The code above is exactly the same as the combined cells from the Notebook.</p>
</blockquote>
<ul>
<li><p>I save (CTRL + S) the changes and exit (CTRL + X) the Nano text editor.</p>
</li>
<li><p>I run the following command:</p>
</li>
</ul>
<pre><code class="lang-bash">python3 hello_world.py
</code></pre>
<hr />
<h2 id="heading-advanced-testing-optional">Advanced Testing: OPTIONAL</h2>
<p>The previous “hello-world” test was to ensure the development software was installed correctly. <em>These</em> tests explore what tasks the <code>deepseek-r1:14b</code> LM can perform out of the box.</p>
<p>They also demonstrate a range of tasks that can be expressed using DSPy programming techniques.</p>
<ul>
<li>At the (DSPy) terminal, I change to the DSPy directory:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> ~/DSPy
</code></pre>
<ul>
<li>I install SGLang:</li>
</ul>
<pre><code class="lang-bash">pip install <span class="hljs-string">"sglang[all]"</span>
</code></pre>
<blockquote>
<p>NOTE: <a target="_blank" href="https://docs.sglang.ai/index.html">SGLang</a> is a fast serving framework for large language models and vision language models.</p>
</blockquote>
<ul>
<li>I install FlashInfer:</li>
</ul>
<pre><code class="lang-bash">pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/
</code></pre>
<blockquote>
<p>NOTE: <a target="_blank" href="https://docs.flashinfer.ai/">FlashInfer</a> is a library and kernel generator for Large Language Models that provides high-performance implementation of LLM GPU kernels such as FlashAttention, PageAttention and LoRA.</p>
</blockquote>
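<ul>
<li>Because the FlashInfer wheel above targets CUDA 12.1 and Torch 2.4, an optional check that a GPU is visible can save debugging later (this assumes PyTorch was pulled in by the installs above):</li>
</ul>
<pre><code class="lang-bash">nvidia-smi
python3 -c "import torch; print(torch.cuda.is_available())"
</code></pre>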
<h3 id="heading-maths">Maths.</h3>
<ul>
<li>I run the Jupyter Notebook:</li>
</ul>
<pre><code class="lang-bash">jupyter notebook --port 8091
</code></pre>
<ul>
<li><p>In the file menu, I select "File &gt; New &gt; Notebook",</p>
</li>
<li><p>I choose the default "Python 3 (ipykernel)" kernel, select the "Always start the preferred kernel" tick box, and click the blue "Select" button,</p>
</li>
<li><p>In the file menu of the Notebook, I select "File &gt; Rename...",</p>
</li>
<li><p>I rename the Notebook as "maths.ipynb" and click the blue "Rename" button.</p>
</li>
<li><p>In the Notebook, I define the local model that is used by DSPy:</p>
</li>
</ul>
<pre><code class="lang-bash">import dspy

lm = dspy.LM(model=<span class="hljs-string">'ollama/deepseek-r1:14b'</span>)
dspy.configure(lm=lm)
</code></pre>
<ul>
<li>In a new cell, I add the following:</li>
</ul>
<pre><code class="lang-bash">maths = dspy.ChainOfThought(<span class="hljs-string">"question -&gt; answer: float"</span>)
maths(question=<span class="hljs-string">"Two dice are tossed. What is the probability that the sum equals two?"</span>)
</code></pre>
<blockquote>
<p>ANSWER: 0.0277778.</p>
</blockquote>
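<p>As a sanity check on that answer: the sum can only equal two when both dice show a one, which is 1 of the 36 equally likely outcomes, so the expected probability is 1/36:</p>
<pre><code class="lang-bash">python3 -c "print(1/36)"
</code></pre>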
<h3 id="heading-rag">RAG.</h3>
<ul>
<li><p>In the Jupyter Notebook file menu, I select "File &gt; New &gt; Notebook",</p>
</li>
<li><p>I choose the default "Python 3 (ipykernel)" kernel, select the "Always start the preferred kernel" tick box, and click the blue "Select" button,</p>
</li>
<li><p>In the file menu of the Notebook, I select "File &gt; Rename...",</p>
</li>
<li><p>I rename the Notebook as "rag.ipynb" and click the blue "Rename" button.</p>
</li>
<li><p>In the Notebook, I define the local model that is used by DSPy:</p>
</li>
</ul>
<pre><code class="lang-bash">import dspy

lm = dspy.LM(model=<span class="hljs-string">'ollama/deepseek-r1:14b'</span>)
dspy.configure(lm=lm)
</code></pre>
<ul>
<li>In a new cell, I add the following:</li>
</ul>
<pre><code class="lang-bash">def search_wikipedia(query: str) -&gt; list[str]:
    results = dspy.ColBERTv2(url=<span class="hljs-string">'http://20.102.90.50:2017/wiki17_abstracts'</span>)(query, k=3)
    <span class="hljs-built_in">return</span> [x[<span class="hljs-string">'text'</span>] <span class="hljs-keyword">for</span> x <span class="hljs-keyword">in</span> results]

rag = dspy.ChainOfThought(<span class="hljs-string">'context, question -&gt; response'</span>)

question = <span class="hljs-string">"What's the name of the castle that David Gregory inherited?"</span>
rag(context=search_wikipedia(question), question=question)
</code></pre>
<blockquote>
<p>ANSWER: Kinnairdy Castle.</p>
</blockquote>
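<ul>
<li>This example depends on a public ColBERTv2 demo endpoint, so if the cell hangs, a quick reachability check helps (an optional diagnostic; any HTTP status code at all means the server is up):</li>
</ul>
<pre><code class="lang-bash">curl -s -o /dev/null -w "%{http_code}\n" http://20.102.90.50:2017/wiki17_abstracts
</code></pre>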
<h3 id="heading-classification">Classification.</h3>
<ul>
<li><p>In the Jupyter Notebook file menu, I select "File &gt; New &gt; Notebook",</p>
</li>
<li><p>I choose the default "Python 3 (ipykernel)" kernel, select the "Always start the preferred kernel" tick box, and click the blue "Select" button,</p>
</li>
<li><p>In the file menu of the Notebook, I select "File &gt; Rename...",</p>
</li>
<li><p>I rename the Notebook as "classification.ipynb" and click the blue "Rename" button.</p>
</li>
<li><p>In the Notebook, I define the local model that is used by DSPy:</p>
</li>
</ul>
<pre><code class="lang-bash">import dspy

lm = dspy.LM(model=<span class="hljs-string">'ollama/deepseek-r1:14b'</span>)
dspy.configure(lm=lm)
</code></pre>
<ul>
<li>In a new cell, I add the following:</li>
</ul>
<pre><code class="lang-bash">from typing import Literal

class Classify(dspy.Signature):
    <span class="hljs-string">""</span><span class="hljs-string">"Classify sentiment of a given sentence."</span><span class="hljs-string">""</span>

    sentence: str = dspy.InputField()
    sentiment: Literal[<span class="hljs-string">'positive'</span>, <span class="hljs-string">'negative'</span>, <span class="hljs-string">'neutral'</span>] = dspy.OutputField()
    confidence: <span class="hljs-built_in">float</span> = dspy.OutputField()

classify = dspy.Predict(Classify)
classify(sentence=<span class="hljs-string">"This book was super fun to read, though not the last chapter."</span>)
</code></pre>
<blockquote>
<p>ANSWER: sentiment='positive', confidence=0.75.</p>
</blockquote>
<h3 id="heading-information-extraction">Information Extraction.</h3>
<ul>
<li><p>In the Jupyter Notebook file menu, I select "File &gt; New &gt; Notebook",</p>
</li>
<li><p>I choose the default "Python 3 (ipykernel)" kernel, select the "Always start the preferred kernel" tick box, and click the blue "Select" button,</p>
</li>
<li><p>In the file menu of the Notebook, I select "File &gt; Rename...",</p>
</li>
<li><p>I rename the Notebook as "extraction.ipynb" and click the blue "Rename" button.</p>
</li>
<li><p>In the Notebook, I define the local model that is used by DSPy:</p>
</li>
</ul>
<pre><code class="lang-bash">import dspy

lm = dspy.LM(model=<span class="hljs-string">'ollama/deepseek-r1:14b'</span>)
dspy.configure(lm=lm)
</code></pre>
<ul>
<li>In a new cell, I add the following:</li>
</ul>
<pre><code class="lang-bash">class ExtractInfo(dspy.Signature):
    <span class="hljs-string">""</span><span class="hljs-string">"Extract structured information from text."</span><span class="hljs-string">""</span>

    text: str = dspy.InputField()
    title: str = dspy.OutputField()
    headings: list[str] = dspy.OutputField()
    entities: list[dict[str, str]] = dspy.OutputField(desc=<span class="hljs-string">"a list of entities and their metadata"</span>)

module = dspy.Predict(ExtractInfo)

text = <span class="hljs-string">"Apple Inc. announced its latest iPhone 14 today."</span> \
    <span class="hljs-string">"The CEO, Tim Cook, highlighted its new features in a press release."</span>
response = module(text=text)

<span class="hljs-built_in">print</span>(response.title)
<span class="hljs-built_in">print</span>(response.headings)
<span class="hljs-built_in">print</span>(response.entities)
</code></pre>
<blockquote>
<p>ANSWER: Apple Announces iPhone 14 ['Announcement Details', 'Press Release Highlights', 'New Features'] [{'EntityType': 'Company', 'Description': 'Technology company known for iPhones'}, {'EntityType': 'Person', 'Description': 'CEO of Apple'}]</p>
</blockquote>
<h3 id="heading-agents">Agents.</h3>
<ul>
<li><p>In the Jupyter Notebook file menu, I select "File &gt; New &gt; Notebook",</p>
</li>
<li><p>I choose the default "Python 3 (ipykernel)" kernel, select the "Always start the preferred kernel" tick box, and click the blue "Select" button,</p>
</li>
<li><p>In the file menu of the Notebook, I select "File &gt; Rename...",</p>
</li>
<li><p>I rename the Notebook as "agents.ipynb" and click the blue "Rename" button.</p>
</li>
<li><p>In the Notebook, I define the local model that is used by DSPy:</p>
</li>
</ul>
<pre><code class="lang-bash">import dspy

lm = dspy.LM(model=<span class="hljs-string">'ollama/deepseek-r1:14b'</span>)
dspy.configure(lm=lm)
</code></pre>
<ul>
<li>In a new cell, I add the following:</li>
</ul>
<pre><code class="lang-bash">def evaluate_math(expression: str):
    <span class="hljs-built_in">return</span> dspy.PythonInterpreter({}).execute(expression)

def search_wikipedia(query: str):
    results = dspy.ColBERTv2(url=<span class="hljs-string">'http://20.102.90.50:2017/wiki17_abstracts'</span>)(query, k=3)
    <span class="hljs-built_in">return</span> [x[<span class="hljs-string">'text'</span>] <span class="hljs-keyword">for</span> x <span class="hljs-keyword">in</span> results]

react = dspy.ReAct(<span class="hljs-string">"question -&gt; answer: float"</span>, tools=[evaluate_math, search_wikipedia])

pred = react(question=<span class="hljs-string">"What is 9362158 divided by the year of birth of David Gregory of Kinnairdy castle?"</span>)
<span class="hljs-built_in">print</span>(pred.answer)
</code></pre>
<blockquote>
<p>ANSWER: 5761.328</p>
</blockquote>
<hr />
<h2 id="heading-the-results">The Results.</h2>
<p>Setting up DSPy involves creating a robust development environment where complex tasks can leverage the capabilities of advanced, powerful language models. By following the outlined process (installing Ollama and Miniconda, creating a dedicated Conda environment, and setting up a Jupyter Notebook), I can use the resulting environment to efficiently manage and execute DSPy programmes. The whole purpose of this post is to set up a local workspace for DSPy development. With DSPy, I am equipped to explore and implement advanced LM capabilities that provide leading, innovative solutions.</p>
<hr />
<h2 id="heading-in-conclusion">In Conclusion.</h2>
<p>I am now ready to revolutionize my approach to complex tasks with advanced LMs. Meet DSPy (<strong><em>Declarative Self-improving Python</em></strong>) – my new best friend in systematic problem-solving!</p>
<p>Setting up DSPy is a breeze with a few essential steps. I start by installing Ollama and Miniconda, then I create a dedicated environment for DSPy.</p>
<p>Next, I set up Jupyter Notebook to streamline my workflow and enhance my productivity. With DSPy, I’m paving the way for innovative solutions across numerous domains.</p>
<p>By following this structured setup, I can efficiently manage and execute DSPy procedures, unlocking advanced LM capabilities.</p>
<p>Are you ready to explore the potential of DSPy in your projects? How do you plan to leverage advanced LMs for your next big idea? Let's discuss below!</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<h2 id="heading-tags">Tags.</h2>
<p>#DSPy #Python #LanguageModels #Innovation #ProblemSolving</p>
]]></content:encoded></item><item><title><![CDATA[Happy New Year 2025.]]></title><description><![CDATA[Happy New Year, everyone.]]></description><link>https://solodev.app/happy-new-year-2025</link><guid isPermaLink="true">https://solodev.app/happy-new-year-2025</guid><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Wed, 01 Jan 2025 10:46:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1735668391837/f6f04403-281a-4283-b3ae-354d4bd1c832.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Happy New Year, everyone.</p>
]]></content:encoded></item><item><title><![CDATA[Docker Desktop on Ubuntu 24.04 LTS.]]></title><description><![CDATA[TL;DR.
This post provides a comprehensive guide on installing and uninstalling Docker and Docker Desktop on Ubuntu 24.04. It covers the prerequisites, step-by-step installation process, and troubleshooting tips, ensuring a smooth setup for managing c...]]></description><link>https://solodev.app/docker-desktop-on-ubuntu-2404-lts</link><guid isPermaLink="true">https://solodev.app/docker-desktop-on-ubuntu-2404-lts</guid><category><![CDATA[Docker Uninstallation]]></category><category><![CDATA[Docker Tips]]></category><category><![CDATA[Docker]]></category><category><![CDATA[docker desktop]]></category><category><![CDATA[Ubuntu 24.04]]></category><category><![CDATA[containerization]]></category><category><![CDATA[Microservices]]></category><category><![CDATA[Linux]]></category><category><![CDATA[software development]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[docker installation]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Tue, 31 Dec 2024 09:00:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1739828782754/b03502c9-e71a-4cec-9587-fe9aed9a3a02.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-tldr">TL;DR.</h1>
<p>This post provides a comprehensive guide on installing and uninstalling Docker and Docker Desktop on Ubuntu 24.04. It covers the prerequisites, step-by-step installation process, and troubleshooting tips, ensuring a smooth setup for managing containerized applications and microservices. The guide also highlights the benefits of using Docker Desktop for efficient software development.</p>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><a target="_blank" href="https://www.docker.com/">https://www.docker.com/</a> <strong><em>↗.</em></strong></p>
</blockquote>
<hr />
<h1 id="heading-an-introduction">An Introduction.</h1>
<p>Docker and Docker Desktop are essential tools for modern software development. Whether you're a beginner or an experienced developer, this article will help you enhance your development workflow.</p>
<blockquote>
<p>The purpose of this post is to describe how to install Docker and Docker Desktop on Ubuntu 24.04.</p>
</blockquote>
<hr />
<h1 id="heading-the-big-picture">The Big Picture.</h1>
<p>Docker and Docker Desktop are pivotal to modern software development. The containerisation approach to coding enhances the portability and scalability of applications across different platforms, eliminating any “it runs on my system” obstacles.</p>
<hr />
<h1 id="heading-prerequisites">Prerequisites.</h1>
<ul>
<li>A Debian-based Linux distro (I use Ubuntu).</li>
</ul>
<hr />
<h1 id="heading-updating-my-base-system">Updating my Base System.</h1>
<ul>
<li>From the (base) terminal, I update my (base) system:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<blockquote>
<p>NOTE: The Ollama LLM manager is already installed on my (base) system.</p>
</blockquote>
<hr />
<h1 id="heading-what-is-docker">What is Docker?</h1>
<p>Docker is used to containerise apps (and their dependencies) so they can run on any desktop and/or server operating system.</p>
<h2 id="heading-installing-docker">Installing Docker.</h2>
<blockquote>
<p><strong>Attribution:</strong></p>
<p><a target="_blank" href="https://linuxiac.com/how-to-install-docker-on-ubuntu-24-04-lts/">https://linuxiac.com/how-to-install-docker-on-ubuntu-24-04-lts/</a> <strong><em>↗.</em></strong></p>
</blockquote>
<ul>
<li>From the terminal, I add HTTPS and the Curl utility:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y apt-transport-https curl
</code></pre>
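<ul>
<li>On a fresh system, the keyrings directory used in the next step may not exist yet; Docker's own instructions create it first, which avoids a write error:</li>
</ul>
<pre><code class="lang-bash">sudo install -m 0755 -d /etc/apt/keyrings
</code></pre>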
<ul>
<li>I import the Docker GPG repository key:</li>
</ul>
<pre><code class="lang-bash">curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
</code></pre>
<ul>
<li>I add the official Docker repository to my system:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [arch=<span class="hljs-subst">$(dpkg --print-architecture)</span> signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu <span class="hljs-subst">$(. /etc/os-release &amp;&amp; echo <span class="hljs-string">"<span class="hljs-variable">$VERSION_CODENAME</span>"</span>)</span> stable"</span> | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
</code></pre>
<ul>
<li>I refresh my local repo list:</li>
</ul>
<pre><code class="lang-bash">sudo apt update
</code></pre>
<ul>
<li>I install Docker:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
</code></pre>
<ul>
<li>I check to see if Docker is active:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl is-active docker
</code></pre>
<ul>
<li>I test the installation:</li>
</ul>
<pre><code class="lang-bash">sudo docker run hello-world
</code></pre>
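<ul>
<li>Optionally, to run Docker without prefixing every command with <code>sudo</code>, I can add my user to the <code>docker</code> group (this takes effect after logging out and back in, or after running <code>newgrp docker</code>):</li>
</ul>
<pre><code class="lang-bash">sudo usermod -aG docker $USER
</code></pre>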
<h2 id="heading-uninstalling-docker">Uninstalling Docker.</h2>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><a target="_blank" href="https://learnubuntu.com/uninstall-docker/">https://learnubuntu.com/uninstall-docker/</a> <strong><em>↗.</em></strong></p>
</blockquote>
<ul>
<li>From the terminal, I stop the running Docker containers:</li>
</ul>
<pre><code class="lang-bash">docker stop $(docker ps -a -q)
</code></pre>
<ul>
<li>I remove the Docker containers:</li>
</ul>
<pre><code class="lang-bash">docker rm $(docker ps -a -q)
</code></pre>
<ul>
<li>I remove the Docker images:</li>
</ul>
<pre><code class="lang-bash">docker rmi $(docker images -a -q)
</code></pre>
<ul>
<li>I prune the custom Docker networks:</li>
</ul>
<pre><code class="lang-bash">docker network prune
</code></pre>
<ul>
<li>I prune the Docker containers, networks, images, cache and volumes:</li>
</ul>
<pre><code class="lang-bash">docker system prune -a
</code></pre>
<ul>
<li>I purge every Docker package:</li>
</ul>
<pre><code class="lang-bash">sudo apt purge docker-* containerd.io --auto-remove
</code></pre>
<ul>
<li>I remove the Docker files:</li>
</ul>
<pre><code class="lang-bash">sudo rm -rf /var/lib/docker
</code></pre>
<ul>
<li>I remove the Docker group:</li>
</ul>
<pre><code class="lang-bash">sudo groupdel docker
</code></pre>
<ul>
<li>I remove the Docker socket:</li>
</ul>
<pre><code class="lang-bash">sudo rm -rf /var/run/docker.sock
</code></pre>
<ul>
<li>I remove Docker Compose:</li>
</ul>
<pre><code class="lang-bash">sudo rm -rf /usr/<span class="hljs-built_in">local</span>/bin/docker-compose &amp;&amp; sudo rm -rf /etc/docker &amp;&amp; sudo rm -rf ~/.docker
</code></pre>
<hr />
<h1 id="heading-what-is-docker-desktop">What is Docker Desktop?</h1>
<p>Docker Desktop is used to build, share, and run containerized applications and microservices.</p>
<h2 id="heading-installing-docker-desktop">Installing Docker Desktop.</h2>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><a target="_blank" href="https://docs.docker.com/desktop/setup/install/linux/ubuntu/">https://docs.docker.com/desktop/setup/install/linux/ubuntu/</a> <strong><em>↗,</em></strong></p>
<p><a target="_blank" href="https://dev.to/chandrashekhar/docker-desktop-is-not-working-on-ubuntu-2404-lts--2kpa">https://dev.to/chandrashekhar/docker-desktop-is-not-working-on-ubuntu-2404-lts--2kpa</a> <strong><em>↗.</em></strong></p>
</blockquote>
<ul>
<li>I download the latest version of Docker Desktop:</li>
</ul>
<p><a target="_blank" href="https://docs.docker.com/desktop/release-notes/">https://docs.docker.com/desktop/release-notes/</a></p>
<ul>
<li>From the terminal, I change to the <code>Downloads</code> directory:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> ~/Downloads
</code></pre>
<ul>
<li>I install the DEB package:</li>
</ul>
<pre><code class="lang-bash">sudo apt install ./docker-desktop*.deb
</code></pre>
<blockquote>
<p>NOTE: Ignore the error message after installation.</p>
</blockquote>
<ul>
<li>I fix the permissions issue:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">'kernel.apparmor_restrict_unprivileged_userns = 0'</span> | sudo tee /etc/sysctl.d/20-apparmor-donotrestrict.conf
</code></pre>
<ul>
<li>I reboot my system:</li>
</ul>
<pre><code class="lang-bash">reboot
</code></pre>
<ul>
<li>After the reboot, I return to a terminal and launch Docker Desktop:</li>
</ul>
<pre><code class="lang-bash">systemctl --user start docker-desktop
</code></pre>
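<ul>
<li>Optionally, I can have Docker Desktop start automatically when I sign in, using the systemd user unit that ships with the DEB package:</li>
</ul>
<pre><code class="lang-bash">systemctl --user enable docker-desktop
</code></pre>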
<ul>
<li>From the Apps Drawer, I pin the Docker Desktop icon to the Dock.</li>
</ul>
<h2 id="heading-uninstall-docker-desktop">Uninstall Docker Desktop.</h2>
<ul>
<li>From the terminal, I remove Docker Desktop from my system:</li>
</ul>
<pre><code class="lang-bash">sudo apt remove docker-desktop
</code></pre>
<ul>
<li>I remove the configuration and data files:</li>
</ul>
<pre><code class="lang-bash">sudo apt remove docker-desktop &amp;&amp; sudo rm /usr/<span class="hljs-built_in">local</span>/bin/com.docker.cli &amp;&amp; sudo apt purge docker-desktop
</code></pre>
<hr />
<h1 id="heading-the-results">The Results.</h1>
<p>Setting up Docker Desktop on Ubuntu 24.04 requires a series of steps that ensures a smooth installation process. By following the outlined procedures, I can effectively manage containerized applications and microservices on my system. Docker Desktop provides a user-friendly interface and powerful tools so I can build, share, and run applications efficiently.</p>
<hr />
<h1 id="heading-in-conclusion">In Conclusion.</h1>
<p>I have discovered how Docker Desktop can transform my workflow!</p>
<p>Docker is a game-changer, allowing me to containerize my apps and their dependencies, ensuring they run seamlessly on any platform.</p>
<p>In this guide, I walked through the steps of installing Docker and Docker Desktop on Ubuntu 24.04.</p>
<p>From updating my base system to fixing permissions issues, I've covered my simple installation process.</p>
<p>With Docker Desktop, I can easily build, share, and run containerized applications and microservices.</p>
<p>It's all about efficiency and user-friendly interfaces!</p>
<p>Curious about the uninstallation process? I've included that too, ensuring I have complete control over my rig.</p>
<p>By following these steps, I can effectively manage my containerized applications and microservices, making development a smoother and more efficient process.</p>
<p>Have you tried Docker Desktop on Ubuntu yet? What challenges did you face, and how did you overcome them? Let's discuss below!</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<h1 id="heading-hash-tags">Hash Tags.</h1>
<p>#Docker #DockerDesktop #Ubuntu2404 #Containerization #Microservices #Linux #SoftwareDevelopment #DevOps #OpenSource #DockerInstallation #DockerUninstallation #DockerTips</p>
]]></content:encoded></item><item><title><![CDATA[Bolt.diy and Ollama on Ubuntu 24.04 LTS.]]></title><description><![CDATA[TL;DR.
Local code generation using the Bolt.diy agent and Ollama enables efficient development of full-stack applications directly from within a browser. By utilizing local LLMs, I can streamline workflows, enhance productivity, and maintain control ...]]></description><link>https://solodev.app/boltdiy-and-ollama-on-ubuntu-2404-lts</link><guid isPermaLink="true">https://solodev.app/boltdiy-and-ollama-on-ubuntu-2404-lts</guid><category><![CDATA[Web Development]]></category><category><![CDATA[AI]]></category><category><![CDATA[coding]]></category><category><![CDATA[Productivity]]></category><category><![CDATA[innovation]]></category><category><![CDATA[Tech Trends]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Mon, 30 Dec 2024 09:00:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1739828225418/1b457073-70e3-414c-bdcd-441ff5d43c2a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-tldr">TL;DR.</h1>
<p>Local code generation using the <a target="_blank" href="https://bolt.diy">Bolt.diy</a> agent and Ollama enables efficient development of full-stack applications directly from within a browser. By utilizing local LLMs, I can streamline workflows, enhance productivity, and maintain control over my development environment. This approach simplifies building and deploying applications while providing access to advanced models and technologies, shaping the future of web development.</p>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><a target="_blank" href="https://github.com/stackblitz-labs/bolt.diy">https://github.com/stackblitz-labs/bolt.diy</a> <strong><em>↗.</em></strong></p>
</blockquote>
<hr />
<h1 id="heading-an-introduction">An Introduction.</h1>
<p><a target="_blank" href="https://Bolt.diy">Bolt.diy</a> and <a target="_blank" href="https://ollama.com">Ollama</a> are two groundbreaking solutions that empower developers to create full-stack applications directly from our browsers. By leveraging local large language models (LLMs), these tools offer a seamless and efficient development experience. Discover how these tools can transform our workflow and keep us at the forefront of technological advancements.</p>
<blockquote>
<p>The purpose of this post is to describe how I install <a target="_blank" href="http://Bolt.diy">Bolt.diy</a> on my local system.</p>
</blockquote>
<hr />
<h1 id="heading-the-big-picture">The Big Picture.</h1>
<p>I want to highlight how <a target="_blank" href="http://Bolt.diy">Bolt.diy</a> is transforming web development by enabling developers to efficiently build full-stack applications, from our browsers, using local large language models. The Big Picture is to show how streamlining our workflows enhances productivity while we also maintain control over our development environments.</p>
<hr />
<h1 id="heading-prerequisites">Prerequisites.</h1>
<ul>
<li><p>A Debian-based Linux distro (I use Ubuntu),</p>
</li>
<li><p>PNPM,</p>
</li>
<li><p>Ollama,</p>
</li>
<li><p>and Git.</p>
</li>
</ul>
<p>The <a target="_blank" href="https://solodev.app/apps-utilities-for-ubuntu-2404-lts">installation instructions</a> are available for these prerequisites.</p>
<blockquote>
<p>NOTE: Docker, NodeJS, and NPM are also needed.</p>
</blockquote>
<hr />
<h1 id="heading-updating-my-base-system">Updating my Base System.</h1>
<ul>
<li>From the (base) terminal, I update my (base) system:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<blockquote>
<p>NOTE: The Ollama LLM manager is already installed on my (base) system.</p>
</blockquote>
<hr />
<h1 id="heading-what-is-pnpm">What is PNPM?</h1>
<p><a target="_blank" href="https://pnpm.io/">PNPM</a> ↗ (Performant Node Package Manager) is a JavaScript package manager, similar to NPM.</p>
<hr />
<h1 id="heading-what-is-ollama">What is Ollama?</h1>
<p><a target="_blank" href="https://ollama.com/"><strong>Ollama</strong></a> ↗ is a tool for downloading, setting up, and running LLMs (large language models). It lets me access powerful models like Llama and Qwen, and helps me run them on my local Linux, macOS, and Windows systems.</p>
<hr />
<h1 id="heading-what-is-git">What is Git?</h1>
<p>Git is a version control manager that tracks any changes that are made to the files in a directory and subdirectories.</p>
<hr />
<h1 id="heading-what-is-bolt">What is Bolt?</h1>
<p><a target="_blank" href="https://github.com/stackblitz-labs/bolt.diy"><strong>Bolt</strong></a> ↗ is a web development agent that allows me to prompt, run, edit, and deploy full-stack applications directly from my browser.</p>
<hr />
<h1 id="heading-downloading-the-models">Downloading the Models.</h1>
<ul>
<li>From the terminal, I use Ollama to pull the DeepSeek Coder V2 16b model:</li>
</ul>
<pre><code class="lang-bash">ollama pull deepseek-coder-v2
</code></pre>
<ul>
<li>I use Ollama to pull the Qwen 2.5 Coder 14b model:</li>
</ul>
<pre><code class="lang-bash">ollama pull qwen2.5-coder:14b
</code></pre>
<ul>
<li>I use Ollama to pull the Qwen 2.5 Coder 7b model:</li>
</ul>
<pre><code class="lang-bash">ollama pull qwen2.5-coder:7b
</code></pre>
<ul>
<li>I use Ollama to list the models:</li>
</ul>
<pre><code class="lang-bash">ollama list
</code></pre>
<hr />
<h1 id="heading-installing-the-bolt-agent">Installing the Bolt Agent.</h1>
<ul>
<li>From the terminal, I check to see if <code>/usr/local/bin</code> is in my $PATH:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-variable">$PATH</span> .
</code></pre>
<ul>
<li>I change to my home directory:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> ~
</code></pre>
<ul>
<li>I clone the Bolt repository:</li>
</ul>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/stackblitz-labs/bolt.diy.git
</code></pre>
<ul>
<li>I change to the Bolt directory:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> bolt.diy
</code></pre>
<h2 id="heading-editing-the-environment-file">Editing the Environment File.</h2>
<ul>
<li>I rename <code>.env.example</code> to <code>.env.local</code> (using the move command):</li>
</ul>
<pre><code class="lang-bash">mv .env.example .env.local
</code></pre>
<blockquote>
<p>NOTE: Ollama runs locally and doesn't need an API key.</p>
</blockquote>
<ul>
<li>I open the <code>.env.local</code> file in the Nano text editor:</li>
</ul>
<pre><code class="lang-bash">sudo nano .env.local
</code></pre>
<ul>
<li>I set the Ollama API base URL:</li>
</ul>
<pre><code class="lang-bash">OLLAMA_API_BASE_URL=http://localhost:11434
</code></pre>
<ul>
<li>I set the context length for the model being used:</li>
</ul>
<pre><code class="lang-bash">DEFAULT_NUM_CTX=8192
</code></pre>
<ul>
<li>I save (CTRL + S) the changes and exit (CTRL + X) the Nano text editor.</li>
</ul>
<hr />
<h1 id="heading-running-the-bolt-agent">Running the Bolt Agent.</h1>
<ul>
<li>From the terminal, I install PNPM (if required):</li>
</ul>
<pre><code class="lang-bash">sudo npm install -g pnpm
</code></pre>
<ul>
<li>I install the dependencies:</li>
</ul>
<pre><code class="lang-bash">pnpm install
</code></pre>
<ul>
<li>I start the Bolt agent:</li>
</ul>
<pre><code class="lang-bash">pnpm run dev
</code></pre>
<ul>
<li>I open the interface in a browser:</li>
</ul>
<pre><code class="lang-bash">http://localhost:5173/
</code></pre>
<blockquote>
<p>After the Bolt agent starts, there may be an error message. Simply refresh the page.</p>
</blockquote>
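<ul>
<li>If the page does not load at all, I can check from another terminal that the dev server is answering on its port (5173 is Vite's default):</li>
</ul>
<pre><code class="lang-bash">curl -I http://localhost:5173/
</code></pre>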
<hr />
<h1 id="heading-the-results">The Results.</h1>
<p>Local code generation using Bolt.diy and Ollama offers a powerful and efficient way to develop full-stack applications directly from my browser. By leveraging AI-powered tools and local LLMs, I can streamline my workflow, enhance productivity, and maintain control over my development environment. This approach not only simplifies the process of building and deploying applications but also ensures that I have access to cutting-edge models and technologies. As the landscape of web development continues to evolve, tools like Bolt.diy and Ollama will play crucial roles in shaping the future of coding and application development.</p>
<hr />
<h1 id="heading-in-conclusion">In Conclusion.</h1>
<p>Am I ready to revolutionize my web development process? Yes!! I want to discover how local code generation with Bolt.diy and Ollama can transform my workflow!</p>
<p>I can easily imagine developing full-stack applications directly from my browser, powered by AI and local LLMs. With Bolt.diy, I can prompt, run, edit, and deploy applications seamlessly, while Ollama ensures I have access to cutting-edge models like DeepSeek Coder V2 and Qwen 2.5 on my local system.</p>
<p>Here's what I need to get started: a Debian-based Linux distro, Git, Ollama, and PNPM. Once set up, I can pull powerful models and integrate them with the Bolt.diy agent for an efficient development experience.</p>
<p>By leveraging these tools, I can streamline my workflow, enhance productivity, and maintain control over my development environment. This approach not only simplifies building and deploying applications but also keeps me at the forefront of web development technology.</p>
<p>As the landscape of web development evolves, tools like Bolt.diy and Ollama are shaping the future of coding and application development.</p>
<p>Are you ready to take your app development process to the next level? How do you see AI-powered tools impacting your workflow? Let me know in the comments below.</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<h1 id="heading-hash-tags">Hash Tags.</h1>
<p>#WebDevelopment #AI #Coding #Productivity #Innovation #TechTrends</p>
]]></content:encoded></item><item><title><![CDATA[Setting Up Git & GitHub on Ubuntu 24.04 LTS.]]></title><description><![CDATA[TL;DR.
Setting up Git and GitHub on a Debian-based Linux system involves several steps: updating my system, installing Git, installing and updating the GitHub CLI tool, configuring my identity, generating SSH keys, and pushing my local repositories t...]]></description><link>https://solodev.app/setting-up-git-github-on-ubuntu-2404-lts</link><guid isPermaLink="true">https://solodev.app/setting-up-git-github-on-ubuntu-2404-lts</guid><category><![CDATA[GitHubSetup]]></category><category><![CDATA[LinuxDevelopment]]></category><category><![CDATA[versioncontrol]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[#GitTutorial  ]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Ubuntu]]></category><category><![CDATA[Devops]]></category><category><![CDATA[DevOpsTools]]></category><category><![CDATA[coding]]></category><category><![CDATA[codinglife ]]></category><category><![CDATA[Programming basics]]></category><category><![CDATA[ProgrammingGuide]]></category><category><![CDATA[techtips]]></category><category><![CDATA[UbuntuTips]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Thu, 12 Dec 2024 09:00:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1733702503983/c6615562-23c2-4b72-8bc3-d4b8a27c4688.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-tldr">TL;DR.</h1>
<p>Setting up Git and GitHub on a Debian-based Linux system involves several steps: updating my system, installing Git, installing and updating the GitHub CLI tool, configuring my identity, generating SSH keys, and pushing my local repositories to GitHub. This guide helps streamline my setup process so I can quickly manage, and secure, my code and projects.</p>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><a target="_blank" href="https://docs.github.com/en/get-started/using-git/about-git">https://docs.github.com/en/get-started/using-git/about-git</a> <strong><em>↗.</em></strong></p>
</blockquote>
<hr />
<h1 id="heading-an-introduction">An Introduction.</h1>
<p>In today's fast-paced development environment, efficient version control and collaboration are crucial for success. Git and GitHub, leading tools for creating, hosting, and managing code repositories, offer powerful processes that streamline version control. These tools are perfect for <em>all</em> developers. Setting up Git and GitHub on a Debian-based Linux system (like Ubuntu) will significantly enhance my workflow. This walkthrough follows the essential steps, from installing Git and the GitHub CLI tool to configuring my identity and securing my projects with SSH keys. This is how I set up my Git and GitHub environments so they can immediately support the development of my various projects.</p>
<blockquote>
<p>The purpose of this post is to cover the installation of Git and GitHub.</p>
</blockquote>
<hr />
<h1 id="heading-the-big-picture">The Big Picture.</h1>
<p>Version control is a system that records changes to a file, or set of files, over time so that I can recall specific versions, if required. It also allows multiple people to collaborate on a project, track changes, and revert to previous states, if necessary. This is particularly useful in software development, where version control helps manage source code changes, resolve conflicts, and maintain a history of modifications. Popular version control systems include Git, Subversion (SVN), and Mercurial.</p>
<hr />
<h1 id="heading-prerequisites">Prerequisites.</h1>
<ul>
<li>A Debian-based Linux distro (I use Ubuntu).</li>
</ul>
<hr />
<h1 id="heading-updating-my-base-system">Updating my Base System.</h1>
<ul>
<li>From the (base) terminal, I update my (base) system:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<blockquote>
<p>NOTE: The Ollama LLM manager is already installed on my (base) system.</p>
</blockquote>
<hr />
<h1 id="heading-what-is-git">What is Git?</h1>
<p>Git is a version control manager created by Linus Torvalds in 2005 and, since then, this project has been maintained by Junio Hamano. This utility helps me track the changes I make to my code. Git is used by development team members, who collaborate on software projects, to merge their individual changes to a local, or remote, repository (repo) at the end of each day.</p>
<p><a target="_blank" href="https://git-scm.com/book/en/v2/Getting-Started-What-is-Git%3F">https://git-scm.com/book/en/v2/Getting-Started-What-is-Git?</a>↗.</p>
<h1 id="heading-what-is-github">What is GitHub?</h1>
<p>GitHub hosts a collection of remote repos where local changes to a project can be saved to this off-site location. These remote repos can either be public or private. Alternatives to GitHub include <a target="_blank" href="https://about.gitlab.com/">GitLab</a> and <a target="_blank" href="https://bitbucket.org/">Bitbucket</a>.</p>
<p><a target="_blank" href="https://docs.github.com/en/get-started/using-git/about-git">https://docs.github.com/en/get-started/using-git/about-git</a>↗.</p>
<hr />
<h1 id="heading-installing-the-git-utility">Installing the Git Utility.</h1>
<ul>
<li>From the terminal, I install Git:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y git-all
</code></pre>
<ul>
<li>I check the installation:</li>
</ul>
<pre><code class="lang-bash">git -v
</code></pre>
<hr />
<h1 id="heading-installing-the-github-cli-tool">Installing the GitHub CLI Tool.</h1>
<ul>
<li>From the terminal, I run the following command:</li>
</ul>
<pre><code class="lang-bash">(<span class="hljs-built_in">type</span> -p wget &gt;/dev/null || (sudo apt update &amp;&amp; sudo apt-get install wget -y)) \
&amp;&amp; sudo mkdir -p -m 755 /etc/apt/keyrings \
&amp;&amp; wget -qO- https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo tee /etc/apt/keyrings/githubcli-archive-keyring.gpg &gt; /dev/null \
&amp;&amp; sudo chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg \
&amp;&amp; <span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [arch=<span class="hljs-subst">$(dpkg --print-architecture)</span> signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main"</span> | sudo tee /etc/apt/sources.list.d/github-cli.list &gt; /dev/null \
&amp;&amp; sudo apt update \
&amp;&amp; sudo apt install -y gh
</code></pre>
<hr />
<h1 id="heading-updating-the-github-cli-tool">Updating the GitHub CLI Tool.</h1>
<ul>
<li>I update my system:</li>
</ul>
<pre><code class="lang-bash">sudo apt update
</code></pre>
<ul>
<li>I update the GitHub CLI tool:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y gh
</code></pre>
<hr />
<h1 id="heading-adding-an-identity">Adding an Identity.</h1>
<ul>
<li>From the terminal, I add an email address to Git:</li>
</ul>
<pre><code class="lang-plaintext">git config --global user.email "me@example.com"
</code></pre>
<ul>
<li>I add my name:</li>
</ul>
<pre><code class="lang-plaintext">git config --global user.name "My Name"
</code></pre>
<hr />
<h1 id="heading-creating-a-new-local-repo">Creating a New, Local Repo.</h1>
<blockquote>
<p>NOTE: Repo is the common, short name for "Repository".</p>
</blockquote>
<ul>
<li><p>From the terminal, I navigate to where I want to create a new project directory.</p>
</li>
<li><p>I create a new project directory:</p>
</li>
</ul>
<pre><code class="lang-bash">md project
</code></pre>
<ul>
<li>I navigate into the project directory:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> project
</code></pre>
<ul>
<li>I initialise the directory as a local Git repo:</li>
</ul>
<pre><code class="lang-plaintext">git init
</code></pre>
<ul>
<li>I create a README.md file:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> <span class="hljs-string">"# Project Name"</span> &gt;&gt; README.md
</code></pre>
<hr />
<h1 id="heading-staging-changes-for-the-local-repo">Staging Changes for the Local Repo.</h1>
<ul>
<li>From the terminal, I stage the README.md file for committing to the local repo:</li>
</ul>
<pre><code class="lang-bash">git add README.md
</code></pre>
<ul>
<li>I can also stage ALL the files that have changed since the last commit:</li>
</ul>
<pre><code class="lang-plaintext">git add -A
</code></pre>
<hr />
<h1 id="heading-committing-changes-to-the-local-repo">Committing Changes to the Local Repo.</h1>
<ul>
<li><p>From the terminal, I navigate to the project folder.</p>
</li>
<li><p>I commit any changes to the local repo, along with a message:</p>
</li>
</ul>
<pre><code class="lang-bash">git commit -m <span class="hljs-string">"First commit"</span>
</code></pre>
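<ul>
<li>Optionally, I can verify that the commit landed and that the working tree is clean:</li>
</ul>
<pre><code class="lang-bash">git status
git log --oneline
</code></pre>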
<hr />
<h1 id="heading-naming-the-local-repo">Naming the Local Repo.</h1>
<ul>
<li><p>From the terminal, I navigate to the project folder.</p>
</li>
<li><p>I name the "main" branch of the repo:</p>
</li>
</ul>
<pre><code class="lang-bash">git branch -M main
</code></pre>
<blockquote>
<p>NOTE: Some “old school” engineers still call it the master branch.</p>
</blockquote>
<hr />
<h1 id="heading-generating-ssh-keys">Generating SSH Keys.</h1>
<ul>
<li><p>From the terminal, I navigate to the project directory.</p>
</li>
<li><p>I generate an SSH key pair and save them to /home/$USER/.ssh/projectname:</p>
</li>
</ul>
<pre><code class="lang-bash">ssh-keygen -b 4096
</code></pre>
<ul>
<li>After generating the SSH keys, I start the ssh-agent:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">eval</span> <span class="hljs-string">"<span class="hljs-subst">$(ssh-agent -s)</span>"</span>
</code></pre>
<ul>
<li>Once the ssh-agent is running, I add my SSH private key to the ssh-agent:</li>
</ul>
<pre><code class="lang-bash">ssh-add ~/.ssh/projectname
</code></pre>
<hr />
<h1 id="heading-adding-the-ssh-public-key-to-github">Adding the SSH Public Key to GitHub</h1>
<ul>
<li><p>In a text editor, I open the SSH public key file "~/.ssh/projectname.pub".</p>
</li>
<li><p>I copy the contents of the SSH public key to the clipboard.</p>
</li>
<li><p>I login to my <a target="_blank" href="https://github.com/">GitHub.com</a> account.</p>
</li>
<li><p>In the top-right corner of the GitHub website, I click my icon and choose "Settings" from the drop-down menu.</p>
</li>
<li><p>In the left menu, I choose "SSH and GPG keys".</p>
</li>
<li><p>I click the green "New SSH key" button.</p>
</li>
<li><p>In the "Title" field, I add a descriptive label for the new key.</p>
</li>
<li><p>I paste the key into the "Key" field.</p>
</li>
<li><p>I click the green "Add SSH key" button.</p>
</li>
<li><p>If prompted, I confirm my GitHub password.</p>
</li>
<li><p>I return to the GitHub home page.</p>
</li>
<li><p>I create a remote repo that uses the project name.</p>
</li>
</ul>
<hr />
<h1 id="heading-pushing-the-local-repo-to-github">Pushing the Local Repo to GitHub.</h1>
<ul>
<li><p>From the terminal, I navigate to the project directory.</p>
</li>
<li><p>I set a new origin for GitHub:</p>
</li>
</ul>
<pre><code class="lang-bash">git remote add origin git@github.com:accountname/projectname.git
</code></pre>
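<ul>
<li>Optionally, I can confirm the remote was registered correctly before pushing:</li>
</ul>
<pre><code class="lang-bash">git remote -v
</code></pre>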
<ul>
<li>I push the local repo to GitHub:</li>
</ul>
<pre><code class="lang-plaintext">git push -u origin main
</code></pre>
<hr />
<h1 id="heading-cloning-an-existing-repo">Cloning an Existing Repo.</h1>
<ul>
<li><p>I visit an existing project on the GitHub repo to confirm the account name and the project name.</p>
</li>
<li><p>From the terminal, I run the following Git command:</p>
</li>
</ul>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/accountname/projectname.git
</code></pre>
<ul>
<li>Alternatively, I can run the GitHub CLI command instead of the Git command:</li>
</ul>
<pre><code class="lang-bash">gh repo <span class="hljs-built_in">clone</span> accountname/projectname
</code></pre>
<blockquote>
<p>NOTE: This command will download the project from GitHub to the location where the terminal ran the command.</p>
</blockquote>
<hr />
<h1 id="heading-the-results">The Results.</h1>
<p>Setting up GitHub on a local, Debian-based Linux system (such as Ubuntu) involves several key steps. From installing and updating the GitHub CLI tool to configuring your identity and generating SSH keys, each step ensures a smooth workflow for managing your code repositories. By following this guide, I can efficiently create, commit, and push my local repositories to GitHub, as well as clone existing ones. This setup not only enhances collaboration but also secures my projects with SSH keys.</p>
<hr />
<h1 id="heading-in-conclusion">In Conclusion.</h1>
<p>I secure my development workflow with Git and GitHub.</p>
<p>Setting up Git and GitHub on a Debian-based Linux system (like Ubuntu) can seem daunting, but it is easier than people think!</p>
<p>By following the steps in this post, I can efficiently manage my code repositories, enhance collaboration, and secure my projects with SSH keys.</p>
<p>Have you set up Git and GitHub on your Linux system yet? What challenges did you face? Let's discuss!</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<h1 id="heading-hash-tags">Hash Tags.</h1>
<p>#Git #GitHub #GitHubSetup #GitTutorial #VersionControl #Linux #LinuxDevelopment #Ubuntu #UbuntuTips #DevOps #DevOpsTools #Coding #CodingLife #ProgrammingBasics #ProgrammingGuide #TechTips #TechHowTo #SoftwareDevelopment #SoftwareEngineering #OpenSource #OpenSourceProjects</p>
]]></content:encoded></item><item><title><![CDATA[SSH Setup for Ubuntu 24.04 LTS.]]></title><description><![CDATA[TL;DR.
SSH Setup for Ubuntu 24.04 LTS involves installing and configuring OpenSSH Server on the remote server, generating an RSA key pair on the local system, and uploading the local, public key to the remote system for secure, password-less authenti...]]></description><link>https://solodev.app/ssh-setup-for-ubuntu-2404-lts</link><guid isPermaLink="true">https://solodev.app/ssh-setup-for-ubuntu-2404-lts</guid><category><![CDATA[Ubuntu 24.04]]></category><category><![CDATA[ssh]]></category><category><![CDATA[ssh-keygen]]></category><category><![CDATA[server management]]></category><category><![CDATA[remote server]]></category><category><![CDATA[remote work]]></category><category><![CDATA[cyber security]]></category><category><![CDATA[Secure Connection]]></category><category><![CDATA[passwordless authentication ]]></category><category><![CDATA[linux-setup]]></category><category><![CDATA[tech tips]]></category><category><![CDATA[Tech Guide ]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Wed, 11 Dec 2024 09:00:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1733637189811/b3195f86-04fc-4c22-82a3-3c02d9b71844.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-tldr">TL;DR.</h1>
<p>SSH Setup for Ubuntu 24.04 LTS involves installing and configuring OpenSSH Server on the remote server, generating an RSA key pair on the local system, and uploading the local, public key to the remote system for secure, password-less authentication.</p>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><a target="_blank" href="https://linuxconfig.org/quick-guide-to-enabling-ssh-on-ubuntu-24-04">https://linuxconfig.org/quick-guide-to-enabling-ssh-on-ubuntu-24-04</a> <strong><em>↗.</em></strong></p>
</blockquote>
<hr />
<h1 id="heading-an-introduction">An Introduction.</h1>
<p>Setting up SSH on Ubuntu 24.04 is a crucial task for anyone looking to manage remote servers securely and efficiently. SSH, or Secure Shell, provides a secure channel over an unsecured network, allowing remote command executions, remote server management, and file transfers. These steps will configure SSH on both my local system and remote server, ensuring a seamless and secure connection. This post establishes a robust SSH setup, enabling password-less authentication and enhancing my overall workflow.</p>
<blockquote>
<p>The purpose of this post is to describe security between systems and servers.</p>
</blockquote>
<hr />
<h1 id="heading-the-big-picture">The Big Picture.</h1>
<p>SSH is widely used in app development for securely accessing and managing remote servers, deploying applications, and transferring files. It allows developers to execute commands on remote machines, manage version control systems, and automate deployment processes, ensuring a secure and efficient workflow.</p>
<hr />
<h1 id="heading-prerequisites">Prerequisites.</h1>
<ul>
<li>Two Debian-based Linux distros (I use Ubuntu), either virtual or actual.</li>
</ul>
<hr />
<h1 id="heading-updating-my-base-system">Updating my Base System.</h1>
<ul>
<li>From the (base) terminal, I update my (base) system:</li>
</ul>
<pre><code class="lang-bash">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<blockquote>
<p>NOTE: The Ollama LLM manager is already installed on my (base) system.</p>
</blockquote>
<hr />
<h1 id="heading-what-is-ssh">What is SSH?</h1>
<p>The Secure Shell (SSH) protocol is a way to safely send commands to a computer over an unsecured network. SSH uses cryptography to authenticate and encrypt the connections between devices. It also allows for tunnelling, or port forwarding, which lets data packets cross networks they otherwise could not. SSH is often used to control servers from a distance, manage infrastructure, and transfer files. Older methods like Telnet sent commands in plain text that anyone could read, but SSH encrypts the connection, which is why it is called the Secure Shell.</p>
<p><a target="_blank" href="https://www.cloudflare.com/learning/access-management/what-is-ssh/">https://www.cloudflare.com/learning/access-management/what-is-ssh/</a><strong><em>↗.</em></strong></p>
<h2 id="heading-installing-ssh-on-the-client">Installing SSH on the Client.</h2>
<p>SSH is used to safely send commands to a computer over an unsecured network.</p>
<ul>
<li>From the terminal, I install SSH:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y ssh
</code></pre>
<ul>
<li>I make a copy of the existing SSH settings:</li>
</ul>
<pre><code class="lang-bash">sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
</code></pre>
<ul>
<li>I open the SSH configuration file in the Nano text editor:</li>
</ul>
<pre><code class="lang-bash">sudo nano /etc/ssh/sshd_config
</code></pre>
<ul>
<li><p>I change the following directives:</p>
<ul>
<li><p>LoginGraceTime 20</p>
</li>
<li><p>PermitRootLogin no</p>
</li>
<li><p>MaxAuthTries 3</p>
</li>
<li><p>⋮</p>
</li>
<li><p>PasswordAuthentication no</p>
</li>
</ul>
</li>
<li><p>I save the changes (CTRL + S) and exit (CTRL + X) the Nano text editor.</p>
</li>
<li><p>I create a privilege separation directory:</p>
</li>
</ul>
<pre><code class="lang-bash">sudo mkdir /run/sshd
</code></pre>
<ul>
<li>I test the SSH configuration for syntax errors:</li>
</ul>
<pre><code class="lang-bash">sudo sshd -t
</code></pre>
<blockquote>
<p>NOTE: If I do not get any output after running this command, then the changes I made are using valid syntax.</p>
</blockquote>
<ul>
<li>I reload the daemon:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl daemon-reload
</code></pre>
<ul>
<li>I restart the SSH socket:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl restart ssh.socket
</code></pre>
<ul>
<li>I check the status of the SSH socket:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl status ssh.socket
</code></pre>
<blockquote>
<p>NOTE: SSH is running when the Active status is active (listening).</p>
</blockquote>
<ul>
<li>I exit the status command (CTRL + C).</li>
</ul>
<h2 id="heading-creating-an-rsa-key-pair-on-the-client">Creating an RSA Key Pair on the Client.</h2>
<ul>
<li>From the terminal, I start the ssh-agent:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">eval</span> <span class="hljs-string">"<span class="hljs-subst">$(ssh-agent -s)</span>"</span>
</code></pre>
<ul>
<li>I generate a pair of RSA (Rivest-Shamir-Adleman) keys and save them as <code>/home/brian/.ssh/key-name</code> (where I replace "key-name" with the name of the remote server):</li>
</ul>
<pre><code class="lang-bash">ssh-keygen -b 4096
</code></pre>
<blockquote>
<p>NOTE: It is my convention to name the RSA keys after the remote server on which they will be applied.</p>
</blockquote>
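<ul>
<li>As an alternative to answering the prompts, the key type, size, and output path can be given up front (the file name and comment below are placeholders for illustration):</li>
</ul>
<pre><code class="lang-bash">ssh-keygen -t rsa -b 4096 -f ~/.ssh/key-name -C "key-name"
</code></pre>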
<ul>
<li>I add the SSH private key to my local system:</li>
</ul>
<pre><code class="lang-bash">ssh-add /home/brian/.ssh/key-name
</code></pre>
<hr />
<h2 id="heading-uploading-the-client-public-key-to-the-server">Uploading the Client Public Key to the Server.</h2>
<ul>
<li>From the <code>local</code> terminal (<code>CTRL</code> + <code>ALT</code> + <code>T</code>), I use the <code>ssh-copy-id</code> command to upload the locally-generated public key to the remote server:</li>
</ul>
<pre><code class="lang-bash">ssh-copy-id -i /home/brian/.ssh/key-name.pub account-name@192.168.?.?
</code></pre>
<blockquote>
<p>NOTE: I replace the "?" with the actual IP address of the remote server.</p>
</blockquote>
<h2 id="heading-using-rsa-to-connect-to-the-server">Using RSA to Connect to the Server.</h2>
<ul>
<li>From the <code>local</code> terminal (<code>CTRL</code> + <code>ALT</code> + <code>T</code>), I connect to the remote server:</li>
</ul>
<pre><code class="lang-bash">ssh <span class="hljs-string">'user-account@192.168.?.?'</span>
</code></pre>
<h2 id="heading-preparing-the-server">Preparing the Server.</h2>
<ul>
<li>From the <code>remote</code> terminal (<code>CTRL</code> + <code>ALT</code> + <code>T</code>), I update the remote server:</li>
</ul>
<pre><code class="lang-bash">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<ul>
<li>I install OpenSSH Server:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y ssh
</code></pre>
<ul>
<li>I set the SSH Server to start on boot:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl <span class="hljs-built_in">enable</span> ssh
</code></pre>
<ul>
<li>I check the status of the SSH Server:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl status ssh
</code></pre>
<ul>
<li>I start the SSH Server if it is not running:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl start ssh
</code></pre>
<ul>
<li>I add an entry to the UFW (Uncomplicated Firewall):</li>
</ul>
<pre><code class="lang-bash">sudo ufw allow ssh
</code></pre>
<ul>
<li>I set the UFW to start on boot:</li>
</ul>
<pre><code class="lang-bash">sudo ufw <span class="hljs-built_in">enable</span>
</code></pre>
<hr />
<h1 id="heading-basic-server-security">Basic Server Security.</h1>
<p>Security is an attitude, not a process.</p>
<h2 id="heading-checking-the-unattended-upgrades-settings">Checking the Unattended Upgrades Settings.</h2>
<ul>
<li>From the terminal, I install unattended upgrades:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y unattended-upgrades
</code></pre>
<ul>
<li>I check if the upgrades are properly configured:</li>
</ul>
<pre><code class="lang-bash">cat /etc/apt/apt.conf.d/20auto-upgrades
</code></pre>
<blockquote>
<p>NOTE: The system will automatically update the package lists and perform unattended upgrades each day as long as <code>APT::Periodic::Update-Package-Lists</code> and <code>APT::Periodic::Unattended-Upgrade</code> are both set to "1".</p>
</blockquote>
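<p>For reference, a correctly configured <code>20auto-upgrades</code> file contains these two lines:</p>
<pre><code class="lang-bash">APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
</code></pre>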
<h2 id="heading-configuring-ufw">Configuring UFW.</h2>
<ul>
<li>From the terminal, I check the UFW status:</li>
</ul>
<pre><code class="lang-bash">sudo ufw status verbose
</code></pre>
<ul>
<li>I allow SSH:</li>
</ul>
<pre><code class="lang-bash">sudo ufw allow ssh
</code></pre>
<ul>
<li>I allow port 80:</li>
</ul>
<pre><code class="lang-bash">sudo ufw allow 80/tcp
</code></pre>
<ul>
<li>I allow port 443:</li>
</ul>
<pre><code class="lang-bash">sudo ufw allow 443/tcp
</code></pre>
<ul>
<li>I enable UFW:</li>
</ul>
<pre><code class="lang-bash">sudo ufw <span class="hljs-built_in">enable</span>
</code></pre>
<blockquote>
<p>NOTE: When setting up a remote server, I ensure SSH is set up on both the client and the server before enabling UFW.</p>
</blockquote>
<ul>
<li>I check the UFW status:</li>
</ul>
<pre><code class="lang-bash">sudo ufw status verbose
</code></pre>
<blockquote>
<p>NOTE: There are <a target="_blank" href="https://solodev.app/3-of-3-hardening-the-remote-container#heading-enabling-and-setting-up-ufw">other UFW commands</a> that may be useful.</p>
</blockquote>
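<p>A few of those commands that I reach for often, shown here as a quick reference:</p>
<pre><code class="lang-bash"># List the active rules with index numbers.
sudo ufw status numbered

# Delete a rule by its index number.
sudo ufw delete 2

# Reload the rules without disrupting existing connections.
sudo ufw reload
</code></pre>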
<h2 id="heading-installing-fail2ban">Installing Fail2Ban.</h2>
<ul>
<li>From the terminal, I install Fail2Ban:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y fail2ban
</code></pre>
<ul>
<li>I copy the <code>jail.conf</code> file as <code>jail.local</code>:</li>
</ul>
<pre><code class="lang-bash">sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
</code></pre>
<ul>
<li>I open the <code>jail.local</code> file in the Nano text editor:</li>
</ul>
<pre><code class="lang-bash">sudo nano /etc/fail2ban/jail.local
</code></pre>
<ul>
<li>I change a few (SSH-centric) settings in the <code>jail.local</code> file:</li>
</ul>
<pre><code class="lang-bash">[DEFAULT]
⋮
bantime = 1d
maxretry = 3
⋮
[sshd]
enabled = <span class="hljs-literal">true</span>
port = ssh,22
</code></pre>
<ul>
<li><p>I save (CTRL + S) the changes, and exit (CTRL + X) the Nano text editor.</p>
</li>
<li><p>I restart Fail2Ban:</p>
</li>
</ul>
<pre><code class="lang-bash">sudo systemctl restart fail2ban
</code></pre>
<ul>
<li>I check the status of Fail2Ban:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl status fail2ban
</code></pre>
<ul>
<li>I enable Fail2Ban to autostart on boot:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl <span class="hljs-built_in">enable</span> fail2ban
</code></pre>
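<p>To confirm the <code>sshd</code> jail is active, and to see any currently banned addresses, I can query Fail2Ban directly:</p>
<pre><code class="lang-bash">sudo fail2ban-client status sshd
</code></pre>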
<h2 id="heading-checking-the-apparmor-status">Checking the AppArmor Status.</h2>
<p>AppArmor, a Linux kernel security module, restricts per-program application capabilities like network access, raw socket access, and the permissions to read, write, or execute files.</p>
<ul>
<li>From the terminal, I check the AppArmor status:</li>
</ul>
<pre><code class="lang-bash">sudo aa-status --verbose
</code></pre>
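<p>If a profile ever needs adjusting, the <code>apparmor-utils</code> package provides helpers for switching a profile between complain and enforce modes. A minimal sketch, using <code>usr.sbin.tcpdump</code> purely as an example profile name:</p>
<pre><code class="lang-bash">sudo apt install -y apparmor-utils

# Log violations without blocking them.
sudo aa-complain /etc/apparmor.d/usr.sbin.tcpdump

# Return the profile to enforcing mode.
sudo aa-enforce /etc/apparmor.d/usr.sbin.tcpdump
</code></pre>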
<h2 id="heading-installing-the-lynis-auditing-tool">Installing the Lynis Auditing Tool.</h2>
<p>Lynis is a flexible security auditing tool for systems running Linux, FreeBSD, macOS, OpenBSD, Solaris, and other Unix-like operating systems, helping administrators and security professionals scan and strengthen system security.</p>
<ul>
<li>From the terminal, I install Lynis:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y lynis
</code></pre>
<ul>
<li>I check the installation:</li>
</ul>
<pre><code class="lang-bash">lynis -V
</code></pre>
<ul>
<li>I run a basic scan:</li>
</ul>
<pre><code class="lang-bash">sudo lynis audit system --quick
</code></pre>
<blockquote>
<p>NOTE: The log file, which is purged every scan, is <code>/var/log/lynis.log</code> and the report file is <code>/var/log/lynis-report.dat</code>.</p>
</blockquote>
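<p>Because the report file is plain text, I can pull the suggestions out of it directly (a minimal sketch, assuming the default report location):</p>
<pre><code class="lang-bash">sudo grep '^suggestion' /var/log/lynis-report.dat
</code></pre>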
<ul>
<li>I check if Lynis is up-to-date:</li>
</ul>
<pre><code class="lang-bash">lynis update check
</code></pre>
<blockquote>
<p><strong>Attribution:</strong></p>
<p><a target="_blank" href="https://cisofy.com/lynis/">https://cisofy.com/lynis/</a> <strong><em>↗.</em></strong></p>
</blockquote>
<hr />
<h1 id="heading-the-results">The Results.</h1>
<p>Setting up SSH on Ubuntu 24.04 LTS is an essential process for ensuring secure and efficient remote server management. By following the steps outlined, I can establish a robust SSH configuration that enhances my workflow with password-less authentication. This setup not only secures my connections but also streamlines my ability to manage and deploy applications remotely. I keep my systems updated and maintain security best practices to protect my infrastructure.</p>
<hr />
<h1 id="heading-in-conclusion">In Conclusion.</h1>
<p>Discover how I set up SSH on Ubuntu 24.04 LTS for secure, password-less authentication!</p>
<p>Setting up SSH is crucial for anyone managing remote servers. It provides a secure channel over unsecured networks, allowing for remote command executions, server management, and file transfers.</p>
<p>Here's a quick overview of the process:</p>
<ol>
<li><p><strong>Update My System</strong>: Ensure my base system is up-to-date for optimal performance.</p>
</li>
<li><p><strong>Generate RSA Key Pair</strong>: Create a secure key pair on my local system to facilitate password-less authentication.</p>
</li>
<li><p><strong>Upload Public Key</strong>: Use <code>ssh-copy-id</code> to transfer my public key to the remote system.</p>
</li>
<li><p><strong>Install OpenSSH Server</strong>: On the remote system, install and configure OpenSSH Server for secure connections.</p>
</li>
<li><p><strong>Configure Firewall</strong>: Set up UFW to allow SSH connections and ensure it starts on boot.</p>
</li>
</ol>
<p>By following these steps, I can establish a robust SSH setup that enhances my workflow and secures my connections. I keep my systems updated and adhere to security best practices.</p>
<p>How do you ensure secure remote server management in your workflow? Share your tips below!</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<h1 id="heading-hash-tags">Hash Tags.</h1>
<p>#Ubuntu-2404 #SSH #SSH-Keygen #ServerManagement #RemoteServer #RemoteWork #CyberSecurity #SecureConnection #PasswordlessAuthentication #LinuxSetup #TechTips #TechGuide</p>
]]></content:encoded></item><item><title><![CDATA[Security and Hardening for Ubuntu 24.04 LTS.]]></title><description><![CDATA[TL;DR.
This post provides a comprehensive guide to securing and hardening an Ubuntu 24.04 LTS system. It covers essential steps such as updating the system, configuring unattended upgrades, setting up a firewall with UFW, installing Fail2Ban to preve...]]></description><link>https://solodev.app/security-and-hardening-for-ubuntu-2404-lts</link><guid isPermaLink="true">https://solodev.app/security-and-hardening-for-ubuntu-2404-lts</guid><category><![CDATA[Ubuntu Security]]></category><category><![CDATA[System Security]]></category><category><![CDATA[Lynis]]></category><category><![CDATA[linux-hardening]]></category><category><![CDATA[cyber security]]></category><category><![CDATA[fail2ban]]></category><category><![CDATA[ufw]]></category><category><![CDATA[AppArmor]]></category><category><![CDATA[stay-secure]]></category><category><![CDATA[Ubuntu 24.04 LTS]]></category><category><![CDATA[techtips]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Sun, 08 Dec 2024 11:00:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1733308461870/d5e6ce25-f4e4-4aee-8ee3-8ac07141dbb0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-tldr">TL;DR.</h1>
<p>This post provides a comprehensive guide to securing and hardening an Ubuntu 24.04 LTS system. It covers essential steps such as updating the system, configuring unattended upgrades, setting up a firewall with UFW, installing Fail2Ban to prevent brute-force attacks, checking AppArmor status for application permissions, and using the Lynis auditing tool for vulnerability assessments. Regular updates and reviews of these configurations are emphasized to maintain system security and resilience against potential threats.</p>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><strong><em>None ↗.</em></strong></p>
</blockquote>
<hr />
<h1 id="heading-an-introduction">An Introduction.</h1>
<p>In an era where cyber threats are increasingly sophisticated, securing an Ubuntu system is more important than ever. This guide walks through the essential steps to harden my system, from updating and automating upgrades to configuring firewalls and using security tools like Fail2Ban, AppArmor, and Lynis. By following these practices, I can enhance my system's resilience against potential threats and ensure my data remains protected.</p>
<blockquote>
<p>The purpose of this post is to promote simple, cybersecurity practices.</p>
</blockquote>
<hr />
<h1 id="heading-the-big-picture">The Big Picture.</h1>
<p>Cybersecurity is the practice of protecting systems, networks, and programs from digital attacks. These cyberattacks are usually aimed at accessing, changing, or destroying sensitive information, extorting money from users, or interrupting normal business processes. As technology evolves, so do the tactics of cybercriminals, making it crucial for individuals and organizations to implement robust security measures. This involves a combination of technology, processes, and practices designed to safeguard networks, devices, programs, and data from attack, damage, or unauthorized access. A comprehensive cybersecurity strategy includes regular updates, threat assessments, and user education to ensure resilience against potential threats.</p>
<hr />
<h1 id="heading-prerequisites">Prerequisites.</h1>
<ul>
<li>A Debian-based Linux distro (I use Ubuntu).</li>
</ul>
<hr />
<h1 id="heading-updating-my-base-system">Updating my Base System.</h1>
<ul>
<li>From the (base) terminal, I update my (base) system:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<blockquote>
<p>NOTE: The Ollama LLM manager is already installed on my (base) system.</p>
</blockquote>
<hr />
<h1 id="heading-checking-the-unattended-upgrades-settings">Checking the Unattended Upgrades Settings.</h1>
<ul>
<li>From the terminal, I install unattended upgrades:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y unattended-upgrades
</code></pre>
<ul>
<li>I check if the upgrades are properly configured:</li>
</ul>
<pre><code class="lang-bash">cat /etc/apt/apt.conf.d/20auto-upgrades
</code></pre>
<blockquote>
<p>NOTE: The system will automatically update the package lists and perform unattended upgrades each day as long as <code>APT::Periodic::Update-Package-Lists</code> and <code>APT::Periodic::Unattended-Upgrade</code> are both set to "1".</p>
</blockquote>
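<p>To verify the configuration without changing anything, unattended-upgrades can be run in dry-run mode:</p>
<pre><code class="lang-bash">sudo unattended-upgrades --dry-run --debug
</code></pre>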
<hr />
<h1 id="heading-configuring-ufw">Configuring UFW.</h1>
<ul>
<li>From the terminal, I check the UFW status:</li>
</ul>
<pre><code class="lang-bash">sudo ufw status verbose
</code></pre>
<ul>
<li>I allow SSH:</li>
</ul>
<pre><code class="lang-bash">sudo ufw allow ssh
</code></pre>
<ul>
<li>I allow port 80:</li>
</ul>
<pre><code class="lang-bash">sudo ufw allow 80/tcp
</code></pre>
<ul>
<li>I allow port 443:</li>
</ul>
<pre><code class="lang-bash">sudo ufw allow 443/tcp
</code></pre>
<ul>
<li>I enable UFW:</li>
</ul>
<pre><code class="lang-bash">sudo ufw <span class="hljs-built_in">enable</span>
</code></pre>
<blockquote>
<p>NOTE: When setting up a remote server, ensure SSH is set up on both the client and the server before enabling UFW.</p>
</blockquote>
<ul>
<li>I check the UFW status:</li>
</ul>
<pre><code class="lang-bash">sudo ufw status verbose
</code></pre>
<blockquote>
<p>NOTE: There are <a target="_blank" href="https://solodev.app/3-of-3-hardening-the-remote-container#heading-enabling-and-setting-up-ufw">other UFW commands</a> that may be useful.</p>
</blockquote>
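<p>One addition worth considering on an SSH-exposed machine is UFW's built-in rate limiting, which temporarily blocks an address that attempts six or more connections within thirty seconds:</p>
<pre><code class="lang-bash">sudo ufw limit ssh
</code></pre>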
<hr />
<h1 id="heading-installing-fail2ban">Installing Fail2Ban.</h1>
<ul>
<li>From the terminal, I install Fail2Ban:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y fail2ban
</code></pre>
<ul>
<li>I copy the <code>jail.conf</code> file as <code>jail.local</code>:</li>
</ul>
<pre><code class="lang-bash">sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
</code></pre>
<ul>
<li>I open the <code>jail.local</code> file in the Nano text editor:</li>
</ul>
<pre><code class="lang-bash">sudo nano /etc/fail2ban/jail.local
</code></pre>
<ul>
<li>I change a few (SSH-centric) settings in the <code>jail.local</code> file:</li>
</ul>
<pre><code class="lang-bash">[DEFAULT]
⋮
bantime = 1d
maxretry = 3
⋮
[sshd]
enabled = <span class="hljs-literal">true</span>
port = ssh,22
</code></pre>
<ul>
<li><p>I save (CTRL + S) the configuration changes, and exit (CTRL + X) the Nano text editor.</p>
</li>
<li><p>I restart Fail2Ban:</p>
</li>
</ul>
<pre><code class="lang-bash">sudo systemctl restart fail2ban
</code></pre>
<ul>
<li>I check the status of Fail2Ban:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl status fail2ban
</code></pre>
<ul>
<li>I enable Fail2Ban to autostart on boot:</li>
</ul>
<pre><code class="lang-bash">sudo systemctl <span class="hljs-built_in">enable</span> fail2ban
</code></pre>
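<p>Once Fail2Ban is running, the <code>fail2ban-client</code> tool shows which jails are active and can unban an address if I ever lock myself out. The IP address below is a placeholder:</p>
<pre><code class="lang-bash"># List the active jails.
sudo fail2ban-client status

# Inspect the sshd jail, including any banned IPs.
sudo fail2ban-client status sshd

# Remove a ban (placeholder address).
sudo fail2ban-client set sshd unbanip 192.0.2.10
</code></pre>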
<hr />
<h1 id="heading-checking-the-apparmor-status">Checking the AppArmor Status.</h1>
<p>AppArmor, a Linux kernel security module, restricts per-program application capabilities like network access, raw socket access, and the permissions to read, write, or execute files.</p>
<ul>
<li>From the terminal, I check the AppArmor status:</li>
</ul>
<pre><code class="lang-bash">sudo aa-status --verbose
</code></pre>
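<p>The profile definitions themselves live under <code>/etc/apparmor.d/</code>, and a summary of loaded profiles is available without the verbose flag:</p>
<pre><code class="lang-bash"># List the installed profile definitions.
ls /etc/apparmor.d/

# Summarise loaded profiles and their modes.
sudo aa-status
</code></pre>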
<hr />
<h1 id="heading-installing-the-lynis-auditing-tool">Installing the Lynis Auditing Tool.</h1>
<p>Lynis is a flexible security auditing tool for systems running Linux, FreeBSD, macOS, OpenBSD, Solaris, and other Unix-like operating systems, helping administrators and security professionals scan and strengthen system security.</p>
<ul>
<li>From the terminal, I install Lynis:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y lynis
</code></pre>
<ul>
<li>I check the installation:</li>
</ul>
<pre><code class="lang-bash">lynis -V
</code></pre>
<ul>
<li>I run a basic scan:</li>
</ul>
<pre><code class="lang-bash">sudo lynis audit system --quick
</code></pre>
<blockquote>
<p>NOTE: The log file, which is purged every scan, is <code>/var/log/lynis.log</code> and the report file is <code>/var/log/lynis-report.dat</code>.</p>
</blockquote>
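<p>The report file also records an overall hardening index, a handy number to track between audits (a minimal sketch, assuming the default report location):</p>
<pre><code class="lang-bash">sudo grep 'hardening_index' /var/log/lynis-report.dat
</code></pre>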
<ul>
<li>I check if Lynis is up-to-date:</li>
</ul>
<pre><code class="lang-bash">lynis update check
</code></pre>
<blockquote>
<p><strong>Attribution:</strong></p>
<p><a target="_blank" href="https://cisofy.com/lynis/">https://cisofy.com/lynis/</a> <strong><em>↗.</em></strong></p>
</blockquote>
<hr />
<h1 id="heading-the-results">The Results.</h1>
<p>Securing and hardening my Ubuntu 24.04 LTS system is a crucial step in protecting my data and maintaining system integrity. By following the outlined steps, such as updating my system, configuring unattended upgrades, setting up UFW, installing Fail2Ban, checking AppArmor status, and utilizing the Lynis auditing tool, I can significantly enhance my system's security posture. Regularly reviewing and updating these configurations will help ensure that my system remains resilient against potential threats.</p>
<hr />
<h1 id="heading-in-conclusion">In Conclusion.</h1>
<p>Is my Ubuntu 24.04 LTS system as secure as it could be?</p>
<p>In today's digital landscape, securing my system is more crucial than ever.</p>
<p>Here's a quick guide to hardening my Ubuntu system:</p>
<ol>
<li><p><strong>Update My System</strong>: Regular updates are my first line of defence.</p>
</li>
<li><p><strong>Unattended Upgrades</strong>: Automate my updates to ensure I’m always protected.</p>
</li>
<li><p><strong>Configure UFW</strong>: Set up my firewall to control incoming and outgoing traffic.</p>
</li>
<li><p><strong>Install Fail2Ban</strong>: Protect against brute-force attacks by banning suspicious IPs.</p>
</li>
<li><p><strong>Check AppArmor Status</strong>: Ensure my applications have the right permissions.</p>
</li>
<li><p><strong>Use Lynis Auditing Tool</strong>: Regularly audit my system for vulnerabilities.</p>
</li>
</ol>
<p>By following these steps, I can significantly enhance my system's security posture.</p>
<p>Remember, regular reviews and updates are key to staying resilient against potential threats.</p>
<p>🔍 How do you ensure your systems are secure? Share your tips below!</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<h1 id="heading-hash-tags">Hash Tags.</h1>
<p>#UbuntuSecurity #LinuxHardening #SystemSecurity #CyberSecurity #Fail2Ban #UFW #AppArmor #Lynis #StaySecure #Ubuntu2404LTS #TechTips</p>
]]></content:encoded></item><item><title><![CDATA[Installing DaVinci Resolve 19 Studio on Ubuntu 24.04 LTS.]]></title><description><![CDATA[Update: 15th March 2025.
TL;DR.
This post provides a step-by-step guide to installing DaVinci Resolve 19 Studio on Ubuntu 24.04 LTS. It covers downloading the software, preparing the download, ensuring compatibility with necessary libraries, and upda...]]></description><link>https://solodev.app/installing-davinci-resolve-19-studio-on-ubuntu-2404-lts</link><guid isPermaLink="true">https://solodev.app/installing-davinci-resolve-19-studio-on-ubuntu-2404-lts</guid><category><![CDATA[Blackmagic Design]]></category><category><![CDATA[DaVinci Resolve 19]]></category><category><![CDATA[DaVinci Resolve 19 Studio]]></category><category><![CDATA[Colour Grading]]></category><category><![CDATA[Video Post]]></category><category><![CDATA[Video Content Creation]]></category><category><![CDATA[Film Editing]]></category><category><![CDATA[Creative Editing]]></category><category><![CDATA[Professional Video Editing]]></category><category><![CDATA[Audio Post Production]]></category><category><![CDATA[Linux Video Editing]]></category><category><![CDATA[Video Editing]]></category><category><![CDATA[video production]]></category><category><![CDATA[Video post production]]></category><category><![CDATA[motion graphics]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Thu, 10 Oct 2024 09:00:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730269534815/3e2ba15c-b6b3-4da6-b3c5-e53f737609a3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Update: 15th March 2025.</p>
<h1 id="heading-tldr">TL;DR.</h1>
<p>This post provides a step-by-step guide to installing DaVinci Resolve 19 Studio on Ubuntu 24.04 LTS. It covers downloading the software, preparing the download, ensuring compatibility with necessary libraries, and updating my drivers. By following these instructions, I can effectively set up DaVinci Resolve Studio for professional video editing on my Linux system.</p>
<blockquote>
<p>NOTE: This guide has only been tested with the Studio version of Resolve. Let me know, in the comments below, if this process also works with the FREE version of DaVinci Resolve. Thank-you.</p>
<p><strong>Attributions:</strong></p>
<p><a target="_blank" href="https://hackmd.io/@brlin/install-davinci-resolve-19-on-ubuntu-2404">hackmd.io</a> <strong><em>↗,</em></strong></p>
<p><a target="_blank" href="https://linux.how2shout.com/how-to-install-davinci-resolve-on-ubuntu-22-04-lts-jammy/">how2shout.com</a> <strong><em>↗.</em></strong></p>
</blockquote>
<hr />
<h1 id="heading-an-introduction">An Introduction.</h1>
<p>Resolve is a great tool for performing many video post production activities.</p>
<blockquote>
<p>The purpose of this post is to describe how to install Resolve Studio on Ubuntu.</p>
</blockquote>
<hr />
<h1 id="heading-the-big-picture">The Big Picture.</h1>
<p>DaVinci Resolve Studio is traditionally thought of as an industry-standard colour correction and colour grading system. But since 2009, Resolve Studio has slowly evolved into a complete, powerful, video production tool with features like:</p>
<ul>
<li><p>Video editing,</p>
</li>
<li><p>Special effects,</p>
</li>
<li><p>Motion graphics, and</p>
</li>
<li><p>Audio post-production.</p>
</li>
</ul>
<p>For professional film productions, the Studio version of Resolve actively supports the workflows of many on-set DITs (Digital Imaging Technicians) as well as activities like editing, colour correction, colour grading, visual effects, green screen work, matte paintings, sound effects, audio post-processing and balancing, and audio mixing.</p>
<hr />
<h1 id="heading-prerequisites">Prerequisites.</h1>
<ul>
<li>A Debian-based Linux distro (I use Ubuntu).</li>
</ul>
<hr />
<h1 id="heading-updating-my-base-system">Updating my Base System.</h1>
<ul>
<li>From the (base) terminal, I update my (base) system:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<blockquote>
<p>NOTE: The Ollama LLM manager is already installed on my (base) system.</p>
</blockquote>
<hr />
<h1 id="heading-what-is-davinci-resolve-studio">What is DaVinci Resolve Studio?</h1>
<p>DaVinci Resolve Studio is a paid video editing, colour grading and correction, visual effects, motion graphics, and audio post-production application for macOS, Windows, and Linux. It was originally developed by da Vinci Systems as da Vinci Resolve. Then in 2009, when da Vinci Systems was acquired by Blackmagic Design, the application was rebranded as DaVinci Resolve.</p>
<p><a target="_blank" href="https://www.blackmagicdesign.com/%E2%86%97">https://www.blackmagicdesign.com/</a><strong><em>↗.</em></strong></p>
<h2 id="heading-installing-the-required-packages">Installing the Required Packages.</h2>
<p>In a terminal, I install the required packages:</p>
<pre><code class="lang-bash">sudo apt install -y libqt5x11extras5
</code></pre>
<h2 id="heading-downloading-davinci-resolve-studio">Downloading DaVinci Resolve Studio.</h2>
<ul>
<li>I download the latest copy of DaVinci Resolve 19 Studio:</li>
</ul>
<p><a target="_blank" href="https://www.blackmagicdesign.com/au/support/family/davinci-resolve-and-fusion">https://www.blackmagicdesign.com/au/support/family/davinci-resolve-and-fusion</a></p>
<h2 id="heading-unzipping-the-downloaded-file">Unzipping the Downloaded File.</h2>
<ul>
<li>In a terminal, I go to the Downloads directory:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> ~/Downloads
</code></pre>
<ul>
<li>I check if the downloaded ZIP file exists:</li>
</ul>
<pre><code class="lang-bash">ls
</code></pre>
<ul>
<li>I install UNZIP (if required):</li>
</ul>
<pre><code class="lang-bash">sudo apt install unzip
</code></pre>
<ul>
<li>I extract the contents of the ZIP file:</li>
</ul>
<pre><code class="lang-bash">sudo unzip ./DaVinci_Resolve_*_Linux.zip
</code></pre>
<h2 id="heading-changing-the-mode-of-the-run-file">Changing the Mode of the RUN File.</h2>
<ul>
<li>I change the mode of the extracted RUN file to an executable:</li>
</ul>
<pre><code class="lang-bash">sudo chmod +x ./DaVinci_Resolve_Studio_*_Linux.run
</code></pre>
<h2 id="heading-installing-the-software">Installing the Software.</h2>
<ul>
<li>I install the software:</li>
</ul>
<pre><code class="lang-bash">sudo SKIP_PACKAGE_CHECK=1 ./DaVinci_Resolve_*_Linux.run -i
</code></pre>
<blockquote>
<p>NOTE: This command does not check for missing libraries during the installation.</p>
</blockquote>
<ul>
<li>I change to the <code>libs</code> directory:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> /opt/resolve/libs
</code></pre>
<ul>
<li>I make a new directory called <code>disabled_libs</code>:</li>
</ul>
<pre><code class="lang-bash">sudo mkdir ./disabled_libs
</code></pre>
<ul>
<li>I move the <code>libglib</code>, <code>libgio</code>, and <code>libgmodule</code> into the <code>disabled_libs</code> directory:</li>
</ul>
<pre><code class="lang-bash">sudo mv libglib-2.0.so* libgio-2.0.so* libgmodule-2.0.so* disabled_libs/
</code></pre>
<blockquote>
<p>NOTE: Moving these libraries forces Resolve to use the Ubuntu 24.04 LTS libraries.</p>
</blockquote>
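<p>To confirm Resolve now links against the system copies of those libraries, I can inspect the binary's resolved dependencies. This is a quick sanity check, not part of the official installation steps:</p>
<pre><code class="lang-bash">ldd /opt/resolve/bin/resolve | grep -E 'libglib|libgio|libgmodule'
</code></pre>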
<ul>
<li>I update my NVIDIA drivers, if required:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y nvidia-driver-550
</code></pre>
<blockquote>
<p>NOTE: Resolve Studio requires the 550 drivers (or later) for NVIDIA GTX GPUs.</p>
</blockquote>
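<p>To confirm which NVIDIA driver version is actually loaded, <code>nvidia-smi</code> prints it in its header:</p>
<pre><code class="lang-bash">nvidia-smi
</code></pre>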
<ul>
<li>I run the application:</li>
</ul>
<pre><code class="lang-bash">/opt/resolve/bin/resolve
</code></pre>
<hr />
<h1 id="heading-the-results">The Results.</h1>
<p>Installing DaVinci Resolve 19 Studio on Ubuntu 24.04 LTS involves several steps, including downloading the Resolve software and ensuring compatibility with the system drivers and libraries. By following this outlined process, I can successfully set up a powerful video editing tool on my Linux system. With Resolve Studio, I can enhance my video editing capabilities and create professional-level video content.</p>
<hr />
<h1 id="heading-in-conclusion">In Conclusion.</h1>
<p>I discovered how to install DaVinci Resolve 19 Studio on Ubuntu 24.04!</p>
<p>DaVinci Resolve is a powerhouse application from Blackmagic Design. It is used for video editing, colour grading, visual effects, and more.</p>
<p>Here's my quick guide to getting started:</p>
<ol>
<li><p>I use a Debian-based Linux distro (like Ubuntu).</p>
</li>
<li><p>I update my base system via the terminal.</p>
</li>
<li><p>I download the latest version of DaVinci Resolve 19 Studio from the Blackmagic Design website.</p>
</li>
<li><p>I unzip the downloaded file.</p>
</li>
<li><p>I make the unzipped RUN file executable.</p>
</li>
<li><p>I install the software while suppressing error messages.</p>
</li>
<li><p>I move specific libraries to another directory.</p>
</li>
<li><p>I update my NVIDIA drivers to 550 or later.</p>
</li>
<li><p>I run the application and start creating professional-level video content!</p>
</li>
</ol>
<p>By following these steps, I can harness the full potential of DaVinci Resolve Studio on my Linux system.</p>
<p>Are you ready to transform your video editing experience? What projects are you excited to work on with DaVinci Resolve Studio? Let me know in the comments!</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<h1 id="heading-hash-tags">Hash Tags.</h1>
<p>#BlackmagicDesign #DaVinciResolve19 #DaVinciResolve19Studio #VideoEditing #ColourGrading #VideoProduction #VideoPost #VideoPostProduction #VideoContentCreation #FilmEditing #CreativeEditing #ProfessionalVideoEditing #MotionGraphics #AudioPostProduction #LinuxVideoEditing</p>
]]></content:encoded></item><item><title><![CDATA[Domain Names for 2024-2025.]]></title><description><![CDATA[TL;DR.
This post explores the significance of domain names in app and website development. It highlights the role of domain names in defining business units, generating income, and showcasing development projects. This article also provides a list of...]]></description><link>https://solodev.app/domain-names-for-2024-2025</link><guid isPermaLink="true">https://solodev.app/domain-names-for-2024-2025</guid><category><![CDATA[Smart Phone Apps]]></category><category><![CDATA[Onboarding App]]></category><category><![CDATA[domain names]]></category><category><![CDATA[Tech Innovation,]]></category><category><![CDATA[AI]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[Business growth ]]></category><category><![CDATA[Digital Transformation]]></category><category><![CDATA[digital presence]]></category><category><![CDATA[future tech]]></category><category><![CDATA[app development]]></category><category><![CDATA[website development,]]></category><category><![CDATA[Learning Management System]]></category><category><![CDATA[email marketing]]></category><category><![CDATA[business management ]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Mon, 24 Jun 2024 10:00:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1718847305606/063831fd-ac63-4efb-a22a-80214780f830.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<h1 id="heading-tldr">TL;DR.</h1>
<p>This post explores the significance of domain names in app and website development. It highlights the role of domain names in defining business units, generating income, and showcasing development projects. This article also provides a list of current and pending domain names, each representing unique projects across media, technology, business, and AI-driven applications. There is a palpable excitement about these projects and I invite my readers to share their thoughts on the most exciting domain-named project.</p>
<hr />
<h1 id="heading-an-introduction">An Introduction.</h1>
<p>The following is a list of current, and pending, domain names:</p>
<blockquote>
<p>The purpose of this post is to list the projects that are currently under development.</p>
</blockquote>
<hr />
<h1 id="heading-the-big-picture">The Big Picture.</h1>
<p>A domain name is my first step in creating an app or website. There are a number of reasons for these apps, including:</p>
<ul>
<li><p>Defining various, internal business units,</p>
</li>
<li><p>Building income generating business units,</p>
</li>
<li><p>Producing business units that can be sold to other companies, and</p>
</li>
<li><p>Showcasing the range of apps and websites that I am able to produce.</p>
</li>
</ul>
<hr />
<h1 id="heading-prerequisites">Prerequisites.</h1>
<ul>
<li>There are no prerequisites for enjoying this post.</li>
</ul>
<hr />
<h1 id="heading-what-is-a-domain-name">What is a Domain Name?</h1>
<p>The Internet uses a 32-bit numbering system, written as four octets (8-bit numbers), to identify the locations of its numerous online resources. These locations are called IPs, or Internet Protocol v4 addresses. (IPv6 addresses are also available, but they are far less commonly used.) Trying to remember four sets of numbers is tiresome so, in the early 1980s, domain names were introduced. When I type a domain name into the address bar of a browser, that name resolves to a specific IP address thanks to the Domain Name System, or DNS. DNS is the phone book of the Internet and uses a distributed service that resolves domain names to IP addresses. (New domains can take up to 48 hours to resolve.) Now that the browser knows the IP address, it can start interacting with the services at that given location. The computer at that specific IP address may be hosting any number of services, including (but not limited to):</p>
<ul>
<li><p>Web servers,</p>
</li>
<li><p>Email servers,</p>
</li>
<li><p>Gaming servers, and</p>
</li>
<li><p>FTP (File Transfer Protocol) servers.</p>
</li>
</ul>
<p>Every device gets an IP address when it connects to the Internet. These dynamic IP addresses are pooled by each Internet Service Provider, or ISP. When a device disconnects from the Internet, its IP address is released and returned to the ISP pool. Companies (and people) that use domain names typically rent space on someone else's servers. They go to their domain name provider, point the domain name(s) to the DNS (Domain Name System) run by the host, activate some settings on the hosting service, and the domain name <em>eventually</em> points to the provided service (web hosting, email server, etc.). Less frequently, people like me rent static IPs from our ISPs, pass our domain name(s) through Cloudflare, use Cloudflare to resolve our domain names to our static IP addresses, and host our own services (web servers, email servers, etc.) on our own on-prem servers. (In my case, I use an Intel NUC v10.)</p>
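<p>DNS resolution is easy to watch in action from a terminal. A minimal sketch, assuming the <code>dig</code> tool (from the <code>dnsutils</code> package on Ubuntu) is installed:</p>
<pre><code class="lang-bash"># Ask DNS for the IPv4 address behind a domain name.
dig +short solodev.app A
</code></pre>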
<hr />
<h2 id="heading-domain-name-table">Domain Name Table.</h2>
<p>Here is a list of the domains that are, or will be, available to my company. Some of these domains are [pending] and will be added to this list as they are secured.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Domain Name</td><td>Description</td><td>Colour</td><td>Type</td></tr>
</thead>
<tbody>
<tr>
<td>Monty.co.nz</td><td>Mobile Video Camera</td><td>Blue (media)</td><td>App</td></tr>
<tr>
<td>SoloDev.app</td><td>Technology Blog</td><td>Red (dev)</td><td>App</td></tr>
<tr>
<td>[pending]</td><td>Learning Management</td><td>Green (business)</td><td>App</td></tr>
<tr>
<td>[pending]</td><td>Automated Teleprompting</td><td>Green (business)</td><td>App</td></tr>
<tr>
<td>[pending]</td><td>Arts &amp; Crafts Blog</td><td>Blue (media)</td><td>Site</td></tr>
<tr>
<td>[pending]</td><td>Vector Animation Tool</td><td>Blue (media)</td><td>Site</td></tr>
<tr>
<td>AnalogModel.com</td><td>Analog Language Models</td><td>Red (dev)</td><td>App</td></tr>
<tr>
<td>WearyCoder.com</td><td>Grandpa Vlog</td><td>Red (dev)</td><td>App</td></tr>
<tr>
<td>[pending]</td><td>UML &amp; ERD Designer</td><td>Red (dev)</td><td>Site</td></tr>
<tr>
<td>MyHomePaige.com</td><td>Client Project</td><td>Green (business)</td><td>App</td></tr>
<tr>
<td>dcx.sx</td><td>DigitalCoreNZ API Library</td><td>Red (dev)</td><td>Site</td></tr>
<tr>
<td>RoboticsFab.com</td><td>Robotic &amp; Hardware Designs</td><td>Red (dev)</td><td>Site</td></tr>
<tr>
<td>[pending]</td><td>DJ Mashup &amp; Mix Videos</td><td>Blue (media)</td><td>App</td></tr>
<tr>
<td>[pending]</td><td>React Native/Expo/Tauri AI</td><td>Purple (AI)</td><td>Site</td></tr>
<tr>
<td>DiaryNotes.org</td><td>Diary Management</td><td>Green (business)</td><td>App</td></tr>
<tr>
<td>GardenPatch.org</td><td>Garden Management</td><td>Green (business)</td><td>App</td></tr>
<tr>
<td>DripMail.net</td><td>Email Management</td><td>Green (business)</td><td>Site</td></tr>
<tr>
<td>AccountsLite.com</td><td>Accounts Management</td><td>Green (business)</td><td>App</td></tr>
<tr>
<td>CompanyLite.com</td><td>Business Management</td><td>Green (business)</td><td>App</td></tr>
<tr>
<td>DigitalCoreNZ.com</td><td>Redirection Domain</td><td>Green (business)</td><td>App</td></tr>
<tr>
<td>GroceryCart.org</td><td>Groceries Management</td><td>Green (business)</td><td>App</td></tr>
<tr>
<td>RecipeAlbum.org</td><td>Recipes Management</td><td>Green (business)</td><td>App</td></tr>
<tr>
<td>DigitalCore.co.nz</td><td>Company Website</td><td>Green (business)</td><td>App</td></tr>
<tr>
<td>[pending]</td><td>Redirection Domain</td><td>Green (business)</td><td>App</td></tr>
</tbody>
</table>
</div><hr />
<h1 id="heading-the-results">The Results.</h1>
<p>The domain names listed here represent a diverse range of projects and business units that highlight my capabilities in app and website development. Each domain serves a unique purpose, from media and technology sites to business and AI-driven applications. As these projects continue to develop and new domains are secured, they will further demonstrate the breadth and depth of what can be achieved in the ever-evolving digital landscape. Stay tuned for updates and new additions to this dynamic portfolio.</p>
<hr />
<h1 id="heading-in-conclusion">In Conclusion.</h1>
<p>Thanks for exploring the future of my domain-named projects for 2024 and 2025.</p>
<p>In this post, I dove deep into the world of domain names and their pivotal role in app and website development. From defining internal business units to building income-generating projects, domain names are the first crucial step in creating my digital presence. But what exactly are domain names, and why are they important?</p>
<p>A domain name is more than just an address; it's a gateway to various online services like web servers, email servers, and more. Thanks to the Domain Name System (DNS), these names resolve to specific IP addresses, making it easier for users to navigate the web.</p>
<p>Here's a sneak peek at some of the exciting domains that are under development:</p>
<ul>
<li><p>Monty.co.nz - A smart phone camera app,</p>
</li>
<li><p><a target="_blank" href="https://SoloDev.app">SoloDev.app</a> - A tech blog for developers,</p>
</li>
<li><p>AuditorLog.com - An onboarding app for apprentices,</p>
</li>
<li><p>TechLearnAI.com - A learning management system powered by AI,</p>
</li>
<li><p>TechNewsAI.com - An AI-driven technology news publishing app, and</p>
</li>
<li><p>CompanyLite.com - A business management app.</p>
</li>
</ul>
<p>And I have other exciting endeavours on the horizon!!</p>
<p>Each domain represents a unique project that showcases a diverse range of capabilities, from media and technology, to business and AI-driven applications.</p>
<p>Curious to know more about these projects and how they might transform the digital landscape? I know <em>I'm</em> excited to start building these entities!!</p>
<p>What do you think is the most exciting domain-named project for 2024 and 2025? Let me know in the comments!</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<h1 id="heading-hash-tags">Hash Tags.</h1>
<p>#SmartPhoneApps #OnboardingApp #DomainNames #TechInnovation #AI #WebDevelopment #BusinessGrowth #DigitalTransformation #DigitalPresence #FutureTech #AppDevelopment #WebsiteDevelopment #LearningManagementSystem #EmailMarketing #BusinessManager #AIApplications #TechProjects #TechBlog #MediaTech</p>
]]></content:encoded></item><item><title><![CDATA[Installing WhisperAI from OpenAI... for FREE.]]></title><description><![CDATA[TL;DR.
This post guides me through the process of installing WhisperAI from OpenAI for FREE. It covers prerequisites like a Debian-based Linux distro and Miniconda, steps to update my system, and installing essential technologies such as FFmpeg, CUDA...]]></description><link>https://solodev.app/installing-whisperai-from-openai-for-free</link><guid isPermaLink="true">https://solodev.app/installing-whisperai-from-openai-for-free</guid><category><![CDATA[WhisperAI]]></category><category><![CDATA[openai]]></category><category><![CDATA[Speech Recognition]]></category><category><![CDATA[Transcription]]></category><category><![CDATA[Language translation]]></category><category><![CDATA[AI]]></category><category><![CDATA[Python]]></category><category><![CDATA[pytorch]]></category><category><![CDATA[NVIDIA]]></category><category><![CDATA[cuda]]></category><category><![CDATA[FFmpeg]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[AI Research]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Deep Learning]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Mon, 27 May 2024 10:00:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1716551061437/d0e6b36b-8a2c-41c2-89ca-7ea3a3b0fe65.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<h1 id="heading-tldr">TL;DR.</h1>
<p>This post guides me through the process of installing WhisperAI from OpenAI for FREE. It covers prerequisites like a Debian-based Linux distro and Miniconda, steps to update my system, and installing essential technologies such as FFmpeg, CUDA Toolkit, PyTorch, and Whisper. It also covers how to set up a Miniconda environment, creating directories, and running Whisper for speech recognition and translation. A future post will further explore how to use Python and Whisper together.</p>
<blockquote>
<p><strong>Attributions:</strong></p>
<p><a target="_blank" href="https://docs.anaconda.com/free/miniconda/index.html">https://docs.anaconda.com/free/miniconda/index.html</a><strong><em>↗,</em></strong></p>
<p><a target="_blank" href="https://openai.com/index/whisper/">https://openai.com/index/whisper/</a><strong><em>↗,</em></strong></p>
<p><a target="_blank" href="https://github.com/openai/whisper">https://github.com/openai/whisper</a><strong><em>↗, and</em></strong></p>
<p><a target="_blank" href="https://www.youtube.com/watch?v=ABFqbY_rmEk">https://www.youtube.com/watch?v=ABFqbY_rmEk</a><strong><em>↗.</em></strong></p>
</blockquote>
<hr />
<h1 id="heading-an-introduction">An Introduction.</h1>
<p>One of the earliest, and most important, AI projects for DigitalCore to complete is the VocalCue.com app. Whisper <em>may</em> become a linchpin technology for the success of the VocalCue project.</p>
<blockquote>
<p>The purpose of this post is to install Whisper, and explore its basic functionality.</p>
</blockquote>
<p>Please note that VocalCue.com is a working title and <em>may</em> be replaced on launch.</p>
<hr />
<h1 id="heading-the-big-picture">The Big Picture.</h1>
<p>Whisper is an open-source project from OpenAI and is available to the public under the MIT license. If I decide to use Whisper as part of my VocalCue teleprompter project, then I will also open-source my code under the same license. This post is all about installing the Whisper tool, as well as three other supporting technologies.</p>
<hr />
<h1 id="heading-prerequisites">Prerequisites.</h1>
<ul>
<li><p>A Debian-based Linux distro (I use Ubuntu),</p>
</li>
<li><p><a target="_blank" href="https://solodev.app/installing-miniconda">Miniconda</a>.</p>
</li>
</ul>
<hr />
<h1 id="heading-updating-my-base-system">Updating my Base System.</h1>
<ul>
<li>From the (base) terminal, I update my (base) system:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<blockquote>
<p>NOTE: The Ollama LLM manager is already installed on my (base) system.</p>
</blockquote>
<hr />
<h1 id="heading-what-is-ffmpeg">What is FFmpeg?</h1>
<p>FFmpeg is a top multimedia tool that can decode, encode, transcode, mux, demux, stream, filter, and play almost any format created by humans or machines. It supports both old and new formats, whether made by a standards group, the community, or a company. FFmpeg is also very portable: it can be compiled, run, and tested on Linux, Mac OS X, Windows, BSDs, Solaris, and more, across different build environments, machine types, and setups.</p>
<p><a target="_blank" href="https://ffmpeg.org/">https://ffmpeg.org/</a> <strong><em>↗.</em></strong></p>
<h2 id="heading-1-of-4-installing-ffmpeg">1 of 4: Installing FFmpeg.</h2>
<ul>
<li>I check to see which version of FFmpeg is installed:</li>
</ul>
<pre><code class="lang-bash">ffmpeg -v
</code></pre>
<blockquote>
<p>NOTE: FFmpeg will need installing if it is not already on my system.</p>
</blockquote>
<ul>
<li>I install FFmpeg, if required, into my (base) environment:</li>
</ul>
<pre><code class="lang-bash">sudo apt update &amp;&amp; sudo apt install ffmpeg
</code></pre>
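<p>As a quick check that FFmpeg works, I can convert a video into the mono, 16 kHz WAV format that Whisper resamples audio to internally. The file names below are placeholders:</p>
<pre><code class="lang-bash">ffmpeg -i test2.mp4 -ar 16000 -ac 1 test1.wav
</code></pre>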
<h1 id="heading-what-is-cuda">What is CUDA?</h1>
<p>CUDA is a parallel computing platform and programming model created by NVIDIA. It has been downloaded over 20 million times and helps developers speed up their applications using GPU accelerators. CUDA is used in many fields, not just high-performance computing and research. For example, pharmaceutical companies use CUDA to find new treatments, cars use it to improve self-driving, and stores use it to analyse customer data for recommendations and ads.</p>
<p>Some people think CUDA, launched in 2006, is just a programming language or an API. But with over 150 CUDA-based libraries, SDKs, and tools, it's much more than that. NVIDIA keeps innovating, and thousands of GPU-accelerated applications use the NVIDIA CUDA platform. CUDA's flexibility and programmability make it the top choice for developing new deep learning and parallel computing algorithms.</p>
<p>CUDA also helps developers easily use the latest GPU features, like those in the NVIDIA Ampere GPU architecture.</p>
<p><a target="_blank" href="https://blogs.nvidia.com/blog/what-is-cuda-2/">https://blogs.nvidia.com/blog/what-is-cuda-2/</a> <strong><em>↗.</em></strong></p>
<h2 id="heading-2-of-4-installing-the-cuda-toolkit">2 of 4: Installing the CUDA Toolkit.</h2>
<blockquote>
<p>NOTE: This section only applies to PCs that use NVIDIA GPUs. Other graphics processors, i.e. AMD GPUs, will need a different process. (I replaced my SAPPHIRE Radeon Nitro+ RX-580 with an NVIDIA RTX-3060 with 12GB of VRAM.)</p>
</blockquote>
<ul>
<li>The following command is available from the NVIDIA website:</li>
</ul>
<p><a target="_blank" href="https://developer.nvidia.com/cuda-downloads">https://developer.nvidia.com/cuda-downloads</a><strong><em>↗.</em></strong></p>
<blockquote>
<p>NOTE: I use the map at the above website to define the following command.</p>
</blockquote>
<ul>
<li>I install the NVIDIA CUDA Toolkit into my (base) environment:</li>
</ul>
<pre><code class="lang-bash">sudo apt install -y nvidia-cuda-toolkit
</code></pre>
<ul>
<li>I check my installed CUDA version:</li>
</ul>
<pre><code class="lang-bash">nvcc --version
</code></pre>
<hr />
<h1 id="heading-what-is-anaconda-and-miniconda">What is Anaconda and Miniconda?</h1>
<p>Python projects can run in virtual environments. These isolated spaces are used to manage project dependencies. Different versions of the same package can run in different environments while avoiding version conflicts.</p>
<p>venv is a built-in Python 3.3+ module that runs virtual environments. Anaconda is a Python and R distribution for scientific computing that includes the <code>conda</code> package manager. Miniconda is a small, free, bootstrap version of Anaconda that also includes the <code>conda</code> package manager, Python, and other packages that are required or useful (like pip and zlib).</p>
<p><a target="_blank" href="http://www.anaconda.com/">http://www.anaconda.com/</a><strong><em>↗,</em></strong></p>
<p><a target="_blank" href="https://docs.anaconda.com/free/miniconda/index.html">https://docs.anaconda.com/free/miniconda/index.html</a><strong><em>↗, and</em></strong></p>
<p><a target="_blank" href="https://solodev.app/installing-miniconda">https://solodev.app/installing-miniconda</a>.</p>
<p>I ensure <a target="_blank" href="https://solodev.app/installing-miniconda">Miniconda is installed</a> (<code>conda -V</code>) before continuing with this post.</p>
<h2 id="heading-creating-a-miniconda-environment">Creating a Miniconda Environment.</h2>
<ul>
<li>I use the <code>conda</code> command to display a <code>list</code> of Miniconda <code>env</code>ironments:</li>
</ul>
<pre><code class="lang-bash">conda env list
</code></pre>
<ul>
<li>I use <code>conda</code> to <code>create</code>, and <code>activate</code>, a new environment named (-n) (Whisper):</li>
</ul>
<pre><code class="lang-bash">conda create -n Whisper python=3.10 -y &amp;&amp; conda activate Whisper
</code></pre>
<blockquote>
<p>NOTE: This command creates the (Whisper) environment, then activates the (Whisper) environment.</p>
</blockquote>
<h2 id="heading-creating-the-whisper-home-directory">Creating the Whisper Home Directory.</h2>
<blockquote>
<p>NOTE: I will now configure the (Whisper) environment so that activating it changes into the home directory.</p>
</blockquote>
<ul>
<li>I create the <code>Whisper</code> home directory:</li>
</ul>
<pre><code class="lang-bash">mkdir ~/Whisper
</code></pre>
<ul>
<li>I make new directories within the (Whisper) environment:</li>
</ul>
<pre><code class="lang-bash">mkdir -p ~/miniconda3/envs/Whisper/etc/conda/activate.d
</code></pre>
<ul>
<li>I use the Nano text editor to create the <code>set_working_directory.sh</code> shell script:</li>
</ul>
<pre><code class="lang-bash">sudo nano ~/miniconda3/envs/Whisper/etc/conda/activate.d/set_working_directory.sh
</code></pre>
<ul>
<li>I copy the following, paste (CTRL + SHIFT + V) it to the <code>set_working_directory.sh</code> script, save (CTRL + S) the changes, and exit (CTRL + X) Nano:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> ~/Whisper
</code></pre>
<ul>
<li>I activate the (base) environment:</li>
</ul>
<pre><code class="lang-bash">conda activate
</code></pre>
<ul>
<li>I activate the (Whisper) environment:</li>
</ul>
<pre><code class="lang-bash">conda activate Whisper
</code></pre>
<blockquote>
<p>NOTE: I should now, by default, be in the <code>~/Whisper</code> home directory.</p>
</blockquote>
<hr />
<h1 id="heading-what-is-pytorch">What is PyTorch?</h1>
<p>PyTorch is an open-source deep learning framework known for its flexibility and ease of use. It works well with Python, a popular language among machine learning developers and data scientists. PyTorch is a complete framework for building deep learning models, which are often used in tasks like image recognition and language processing. Since it is written in Python, most machine learning developers find it easy to learn and use. PyTorch was created by developers at Facebook AI Research and other labs. It combines fast and flexible GPU-accelerated back-end libraries from Torch with an easy-to-use Python frontend. This makes it great for quick prototyping, readable code, and supporting many types of deep learning models. PyTorch allows AI engineers to use a familiar programming style while still creating graphs. It was open-sourced in 2017, and its Python base has made it popular with machine learning developers.</p>
<p><a target="_blank" href="https://pytorch.org/">https://pytorch.org/</a><strong><em>↗.</em></strong></p>
<h2 id="heading-3-of-4-installing-pytorch">3 of 4: Installing PyTorch.</h2>
<ul>
<li>The following command is available from the PyTorch website:</li>
</ul>
<p><a target="_blank" href="https://pytorch.org/get-started/locally/">https://pytorch.org/get-started/locally/</a><strong><em>↗.</em></strong></p>
<blockquote>
<p>NOTE: I use the map at the above website to define the following command.</p>
</blockquote>
<ul>
<li>I install PyTorch in the (Whisper) environment:</li>
</ul>
<pre><code class="lang-bash">pip3 install torch torchvision torchaudio \
--index-url https://download.pytorch.org/whl/cu118
</code></pre>
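<p>To confirm PyTorch can actually see the GPU through CUDA, a one-line check from inside the (Whisper) environment should print <code>True</code>:</p>
<pre><code class="lang-bash">python -c "import torch; print(torch.cuda.is_available())"
</code></pre>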
<hr />
<h1 id="heading-what-is-whisperai">What is WhisperAI?</h1>
<p>From the GitHub page↗:</p>
<p>Whisper is a speech recognition model. It is trained on a large set of audio types and can perform multiple tasks like recognizing different languages, transcribing spoken text, and translating multiple languages to English.</p>
<h2 id="heading-4-of-4-installing-whisper-ai">4 of 4: Installing Whisper AI.</h2>
<ul>
<li>I use the pip command to install Whisper into the (Whisper) environment:</li>
</ul>
<pre><code class="lang-bash">pip install -U openai-whisper
</code></pre>
<blockquote>
<p>NOTE: The <code>-U</code> flag means upgrade, if Whisper is already installed.</p>
</blockquote>
<h2 id="heading-running-whisperai">Running WhisperAI.</h2>
<ul>
<li><p>I place an audio file, called test1.wav, into the Whisper directory.</p>
</li>
<li><p>I run the following command:</p>
</li>
</ul>
<pre><code class="lang-bash">whisper test1.wav
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716460462739/8760c990-c0cb-4c3a-8a11-5c69101ff16a.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>NOTE: I didn't set the <code>--language</code> flag. Whisper decided I was speaking Maori. Actually, the recording is of a Maori speaking. The model may have detected my accent. Or not. Who knows?</p>
</blockquote>
<ul>
<li>Next, I test Whisper to see if FFmpeg can provide it with the original MP4 file:</li>
</ul>
<pre><code class="lang-bash">whisper test2.mp4
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716460503283/0bcbcfa2-6854-4a5d-83c8-f102a170d2e3.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>NOTE: The MP4 test worked. Nice.</p>
</blockquote>
<ul>
<li>Each time Whisper runs, it generates a number of files:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716461872339/d22d8ed6-63a2-4aaa-be1c-4a06e9b5ee8a.png" alt /></p>
<blockquote>
<p>NOTE: Whisper can transcribe multiple files, where each file name is separated by a space.</p>
</blockquote>
<h2 id="heading-whisper-has-5-model-sizes">Whisper has 5 Model Sizes.</h2>
<blockquote>
<p>NOTE: By default, Whisper uses the small model but there are other models, too.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716462414889/26adad49-2d46-4ea4-95c2-42f97ca07ae6.png" alt class="image--center mx-auto" /></p>
<ul>
<li>This time, I test Whisper on a bit of Shakespeare using a different model:</li>
</ul>
<pre><code class="lang-bash">whisper richard3.mp4 --model medium
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716461441658/a926be29-f994-4284-bfae-3f1d0543a084.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>NOTE: Whisper got most of it, but my recording wasn't great. (I didn't warm up.)</p>
</blockquote>
<h2 id="heading-transcribing-and-translating-other-languages">Transcribing and Translating Other Languages.</h2>
<ul>
<li>I can use the <code>--language</code> flag for recordings in other languages, instead of relying on Whisper's auto-detect feature, which samples only the first 30 seconds of audio:</li>
</ul>
<pre><code class="lang-bash">whisper french.wav --language French
</code></pre>
<ul>
<li>I can also translate the audio thanks to the <code>--task</code> flag:</li>
</ul>
<pre><code class="lang-bash">whisper french.wav --language French --task translate
</code></pre>
<blockquote>
<p>NOTE: Translate only works from other languages to English. The other languages include (as per the <code>--help</code> flag) Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Bashkir, Basque, Belarusian, Bengali, Bosnian, Breton, Bulgarian, Burmese, Cantonese, Castilian, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Estonian, Faroese, Finnish, Flemish, French, Galician, Georgian, German, Greek, Gujarati, Haitian, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Lao, Latin, Latvian, Letzeburgesch, Lingala, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Mandarin, Maori, Marathi, Moldavian, Moldovan, Mongolian, Myanmar, Nepali, Norwegian, Nynorsk, Occitan, Panjabi, Pashto, Persian, Polish, Portuguese, Punjabi, Pushto, Romanian, Russian, Sanskrit, Serbian, Shona, Sindhi, Sinhala, Sinhalese, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tagalog, Tajik, Tamil, Tatar, Telugu, Thai, Tibetan, Turkish, Turkmen, Ukrainian, Urdu, Uzbek, Valencian, Vietnamese, Welsh, Yiddish, and Yoruba.</p>
<p>Whisper's performance varies widely depending on the language. Translations are not perfect, but they are good enough to be understood.</p>
</blockquote>
<p>From <a target="_blank" href="https://github.com/openai/whisper?tab=readme-ov-file#available-models-and-languages">the GitHub repo</a>: The image below shows that a lower number means better performance. It is a breakdown of <code>large-v3</code> and <code>large-v2</code> models by language, using WERs (word error rates) or CER (character error rates, shown in <em>Italic</em>) evaluated on the Common Voice 15 and Fleurs datasets.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716467205447/d13b040e-47c8-439c-a746-25801b3b5ce6.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-the-next-step">The Next Step.</h2>
<p>In a follow-up post, I will look at <a target="_blank" href="https://github.com/openai/whisper#python-usage">using Python to manipulate Whisper</a>. Python is an easy-to-learn programming language that is popular with AI scientists and engineers.</p>
<hr />
<h1 id="heading-the-results">The Results.</h1>
<p>Installing WhisperAI from OpenAI is a straightforward process <em>if</em> I follow these steps and prerequisites. By ensuring I have the right environment and tools, like FFmpeg, the CUDA Toolkit, PyTorch, and Whisper itself, I can effectively set up and utilize this powerful speech recognition model. Whisper's ability to transcribe speech, and to translate audio from multiple languages into English, makes it a versatile tool for numerous applications. In a future post, I will explore more advanced usage and customization by using Python, opening up even more possibilities for integrating Whisper into my projects.</p>
<hr />
<h1 id="heading-in-conclusion">In Conclusion.</h1>
<p>I have unlocked the power of FREE speech recognition with Whisper.</p>
<p>I can install, and use, Whisper from OpenAI for FREE. Yes, you heard that right! Whisper, an open-source speech recognition model, is now available for everyone under the MIT license. I dived into how to set it up and start using it today.</p>
<p><strong>Prerequisites:</strong></p>
<ol>
<li><p>A Debian-based Linux distro (I use Ubuntu)</p>
</li>
<li><p>Miniconda</p>
</li>
</ol>
<p><strong>Steps to Get Started:</strong></p>
<ol>
<li><p><strong>I updated my (base) environment:</strong></p>
<ul>
<li>I made sure my system was up to date and stable.</li>
</ul>
</li>
<li><p><strong>I installed essential technologies:</strong></p>
<ul>
<li><p><strong>FFmpeg:</strong> I checked for FFmpeg, and installed it if it was not available.</p>
</li>
<li><p><strong>CUDA Toolkit:</strong> The CUDA toolkit is used to access NVIDIA GPUs.</p>
</li>
<li><p><strong>PyTorch:</strong> I installed PyTorch using the command from its website.</p>
</li>
<li><p><strong>WhisperAI:</strong> I used pip to install Whisper into a virtual environment.</p>
</li>
</ul>
</li>
<li><p><strong>I set up a Miniconda environment:</strong></p>
<ul>
<li>I created and activated a new environment for Whisper.</li>
</ul>
</li>
<li><p><strong>I created the Whisper home directory:</strong></p>
<ul>
<li>I set up directories and scripts to streamline my workflow.</li>
</ul>
</li>
</ol>
<p><strong>Running WhisperAI:</strong></p>
<ul>
<li><p>I placed audio files into the Whisper directory and ran the Whisper command.</p>
</li>
<li><p>Whisper can transcribe audio and translate multiple languages to English, making it a versatile tool for numerous applications.</p>
</li>
</ul>
<p><strong>Whisper Models:</strong></p>
<ul>
<li>Whisper comes with five different model sizes. I chose the one that fits my needs.</li>
</ul>
<p><strong>Next Steps:</strong></p>
<ul>
<li>In a future post, I will explore how to manipulate Whisper using Python, unlocking more flexibility for creating my own projects.</li>
</ul>
<p><strong>Results:</strong></p>
<ul>
<li>Setting up Whisper is straightforward if I follow these steps and prerequisites. With tools like FFmpeg, CUDA Toolkit, PyTorch, and Whisper, I can effectively utilize this powerful speech recognition model.</li>
</ul>
<p>Are you excited to try Whisper in your own projects? What other AI tools are you interested in exploring? Let's discuss in the comments below!</p>
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<p>#WhisperAI #OpenAI #SpeechRecognition #Transcription #LanguageTranslation #AI #Python #PyTorch #NVIDIA #CUDA #FFmpeg #AItools #AIresearch #MachineLearning #DeepLearning #Miniconda #TechInnovation #FreeTools #OpenSource #TechTutorial #Ubuntu #Linux #Programming</p>
<hr />
<p>1/ 🚀 Unlock the power of FREE speech recognition with WhisperAI from OpenAI! Dive into this step-by-step guide to install and explore Whisper for your projects. Ready to get started?</p>
<p>2/ First, ensure you have a Debian-based Linux distro (I use Ubuntu) and Miniconda installed. These are essential prerequisites for setting up WhisperAI.</p>
<p>3/ Update your base system to ensure it’s stable and up-to-date. This is a crucial step before installing any new software or dependencies.</p>
<p>4/ The setup requires four key technologies: FFmpeg, the CUDA Toolkit, PyTorch, and Whisper itself. I’ll walk you through installing each one.</p>
<p>5/ Check if FFmpeg is installed. If not, install it in your base environment. FFmpeg is crucial for handling audio files.</p>
<p>6/ If you have an NVIDIA GPU, install the CUDA Toolkit. This toolkit allows Whisper to leverage the power of your GPU for faster processing.</p>
<p>7/ Next, install PyTorch. This powerful deep learning framework is essential for running Whisper. Follow the command from the PyTorch website.</p>
<p>8/ With PyTorch installed, use pip to install Whisper into a new virtual environment created with Miniconda. This keeps dependencies isolated.</p>
<p>9/ Create and activate a new Miniconda environment specifically for Whisper. This keeps your Whisper setup organized and separate from other projects.</p>
<p>10/ Set up the Whisper home directory within your new environment. Create necessary directories and a script to streamline your workflow.</p>
<p>11/ Place your audio files into the Whisper directory and run the Whisper command. Whisper can transcribe and translate audio, making it incredibly versatile.</p>
<p>12/ Whisper offers five different model sizes. Choose the one that fits your needs for accuracy and performance. Experiment to find the best fit.</p>
<p>13/ In a future post, I’ll explore how to manipulate Whisper using Python, unlocking even more potential for your projects. Stay tuned!</p>
<p>14/ Setting up Whisper is straightforward if you follow these steps. With the right tools and environment, you can harness this powerful speech recognition model.</p>
<p>15/ Want to dive deeper? Read my full article on @SoloDev.app for a detailed guide on installing and using WhisperAI.</p>
<p>Until next time: Be safe, be kind, be awesome! 🚀</p>
]]></content:encoded></item><item><title><![CDATA[Using Flowise with Local LLMs.]]></title><description><![CDATA[TL;DR.
Using Flowise, with local LLMs like Ollama, allows for the creation of cost-effective, secure, and highly customizable AI-powered applications. Flowise provides a versatile environment that supports the integration of various tools and compone...]]></description><link>https://solodev.app/using-flowise-with-local-llms</link><guid isPermaLink="true">https://solodev.app/using-flowise-with-local-llms</guid><category><![CDATA[AI Workflow]]></category><category><![CDATA[Local LLMs]]></category><category><![CDATA[Flowise]]></category><category><![CDATA[AI]]></category><category><![CDATA[AI development]]></category><category><![CDATA[AI Applications]]></category><category><![CDATA[#chatbots]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Tech Innovation,]]></category><category><![CDATA[data privacy]]></category><category><![CDATA[Operational Security]]></category><dc:creator><![CDATA[Brian King]]></dc:creator><pubDate>Sun, 12 May 2024 10:00:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1715493896351/eaa7c848-d5ec-4d8d-99d9-8a6191e7ea6f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-tldr">TL;DR.</h1>
<p>Using Flowise, with local LLMs like Ollama, allows for the creation of cost-effective, secure, and highly customizable AI-powered applications. Flowise provides a versatile environment that supports the integration of various tools and components, enhancing AI workflows and enabling the development of local chatbots and AI agents. This setup offers benefits in terms of data privacy and operational security, making it ideal for both business and personal AI projects.</p>
<blockquote>
<p><strong>Attributions:</strong></p>
<p>A YouTube video produced by <a target="_blank" href="https://www.youtube.com/@leonvanzyl">Leon van Zyl</a><strong><em>↗:</em></strong></p>
<p><a target="_blank" href="https://www.youtube.com/watch?v=85gZ7G-ze3c">https://www.youtube.com/watch?v=85gZ7G-ze3c</a><strong><em>↗.</em></strong></p>
</blockquote>
<hr />
<h1 id="heading-an-introduction">An Introduction.</h1>
<p>Low-code GUI solutions like Flowise allow developers to focus on the big picture while keeping an eye on the details:</p>
<blockquote>
<p>The purpose of this post is to demonstrate how to build Chatflows.</p>
</blockquote>
<hr />
<h1 id="heading-the-big-picture">The Big Picture.</h1>
<p>This post is a continuation of the <a target="_blank" href="https://solodev.app/installing-langflow-and-flowise">Langflow and Flowise installation</a> post.</p>
<p>Flowise is an excellent low-code GUI for creating AI workflows and agents. It simplifies the use of LangChain and LlamaIndex solutions. LangChain is a library, available for both Python and JavaScript, that makes it easier to develop and deploy applications powered by large language models (LLMs). It offers open-source components, integrations, and tools for different purposes, like question answering, chatbots, and more.</p>
<hr />
<h1 id="heading-prerequisites">Prerequisites.</h1>
<ul>
<li><p>A Debian-based Linux distro (I use Ubuntu),</p>
</li>
<li><p><a target="_blank" href="https://solodev.app/installing-miniconda">Miniconda</a>,</p>
</li>
<li><p><a target="_blank" href="https://solodev.app/installing-ollama">Ollama</a>,</p>
</li>
<li><p><a target="_blank" href="https://ollama.com/library">An LLM</a><strong><em>↗</em></strong>,</p>
</li>
<li><p><a target="_blank" href="https://solodev.app/installing-node-and-npm-with-nvm">NPM</a>, and</p>
</li>
<li><p><a target="_blank" href="https://solodev.app/installing-langflow-and-flowise">Flowise UI</a>.</p>
</li>
</ul>
<hr />
<h1 id="heading-updating-my-base-system">Updating my Base System.</h1>
<ul>
<li>From the (base) terminal, I update my (base) system:</li>
</ul>
<pre><code class="lang-python">sudo apt clean &amp;&amp; \
sudo apt update &amp;&amp; \
sudo apt dist-upgrade -y &amp;&amp; \
sudo apt --fix-broken install &amp;&amp; \
sudo apt autoclean &amp;&amp; \
sudo apt autoremove -y
</code></pre>
<blockquote>
<p>NOTE: The Ollama LLM manager is already installed on my (base) system.</p>
</blockquote>
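<p>Before building any Chatflows, I can confirm that Ollama is reachable and that the model I plan to use has been pulled. Below is a minimal sketch, assuming Ollama's default REST endpoint on port 11434 and the <code>codellama:13b-instruct</code> model used later in this post:</p>
<pre><code class="lang-python">import requests

# Ollama serves a small REST API on port 11434 by default;
# /api/tags lists the models that have been pulled locally.
response = requests.get("http://localhost:11434/api/tags", timeout=5)
response.raise_for_status()

models = [m["name"] for m in response.json().get("models", [])]
print("Available models:", models)

if "codellama:13b-instruct" not in models:
    print("Model missing: run `ollama pull codellama:13b-instruct` first.")
</code></pre>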
<hr />
<h1 id="heading-what-is-flowise">What is Flowise?</h1>
<p>Flowise is an open-source, low-code platform that helps me easily create customized AI workflows and agents. It simplifies the development of AI applications, which usually require many iterations, by allowing for quick changes from testing to production.</p>
<p>Chatflows link AI models with various tools like memory, data loaders, and cache, along with over a hundred other integrations, including LangChain and LlamaIndex. This setup enables the creation of autonomous agents and assistants that can perform diverse tasks using custom tools. I can build functional agents and OpenAI assistants, or opt for local AI models to save costs.</p>
<p>Flowise supports extensions and integrations through APIs, SDKs, and embedded chat features. It is platform-agnostic, meaning Flowise can work with local, open-source AI models in secure, offline environments using local data storage. It is compatible with various platforms and technologies like Ollama, HuggingFace, AWS (Amazon Web Services), Azure, and GCP (Google Cloud Platform), offering flexibility in deployment.</p>
<p><a target="_blank" href="https://docs.flowiseai.com/">https://docs.flowiseai.com/</a><strong><em>↗.</em></strong></p>
<hr />
<h1 id="heading-running-the-flowise-ui">Running the Flowise UI.</h1>
<ul>
<li>From the Terminal, I activate the Flow environment:</li>
</ul>
<pre><code class="lang-plaintext">conda activate Flow
</code></pre>
<ul>
<li>I run Flowise:</li>
</ul>
<pre><code class="lang-plaintext">npx flowise start
</code></pre>
<ul>
<li>I open the Flowise UI in the browser:</li>
</ul>
<p><a target="_blank" href="http://localhost:3000">http://localhost:3000</a></p>
<hr />
<h1 id="heading-example-1-local-chatbot">Example 1: Local Chatbot.</h1>
<ul>
<li>I click the <code>Chatflows</code> button in the Main Menu, found on the left of the Flowise UI:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715395278560/0be56ec8-05b3-4998-b39f-a00a47b32d3e.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>At the top-right of the UI, I click the blue <code>+ Add New</code> button.</p>
</li>
<li><p>At the top-right of the <code>Untitled chatflow</code> canvas, I click the Save (💾) button:</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715395791483/c50dfcba-0c97-417a-96dd-3d889c866558.png" alt class="image--center mx-auto" /></p>
<ul>
<li>I name the canvas <code>Local Chatbot</code> and click the blue <code>Save</code> button:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715396024907/1aef3462-eb94-4c51-a2e5-e6d001f4a592.png" alt /></p>
<blockquote>
<p>NOTE: On the left of the UI is a round, blue button with a plus symbol (+). This is the <code>Add Nodes</code> button. Clicking it changes the plus to a minus symbol (-) and opens a drop-down menu; clicking the minus closes the menu again. Within the drop-down menu are closed sub-menus (˅) that twirl open (˄) when clicked, and clicking an open sub-menu closes it again. The convention in this post is to follow the path (&gt;) to a node: every path starts with <code>Add Nodes &gt;</code> and ends with a node being dragged onto the current canvas.</p>
</blockquote>
<ul>
<li>I drag the <code>Add Nodes &gt; Chains &gt; Conversation Chain</code> onto the <code>Local Chatbot</code> canvas:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715396617275/60afd350-98c6-46e7-9339-b97bf6f4efe5.png" alt /></p>
<ul>
<li><p>I drag the <code>Add Nodes &gt; Chat Models &gt; ChatOllama</code> node onto the canvas.</p>
</li>
<li><p>In the <code>ChatOllama</code> node, I define the following settings:</p>
</li>
</ul>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Setting</td><td>Value</td></tr>
</thead>
<tbody>
<tr>
<td>Base URL *</td><td>http://localhost:11434</td></tr>
<tr>
<td>Model Name *</td><td>codellama:13b-instruct</td></tr>
<tr>
<td>Temperature</td><td>0.7</td></tr>
</tbody>
</table>
</div><ul>
<li>I connect the <code>ChatOllama</code> Output to the <code>Chat Model *</code> Input of the <code>Conversation Chain</code> by clicking-and-dragging a connector from one node to the other node:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715404997726/0128a40a-68b0-41ab-88de-66556bb2a44b.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>I drag the <code>Add Nodes &gt; Memory &gt; Buffer Memory</code> onto the canvas.</p>
</li>
<li><p>I connect the <code>Buffer Memory</code> Output to the <code>Memory *</code> Input of the <code>Conversation Chain</code>:</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715404821620/d9986482-3ae0-441b-8378-19d11bf7aa80.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>I save the changes to the <code>Local Chatbot</code>.</p>
</li>
<li><p>On the right of the UI, I click the round, purple Chat icon to send the "Tell me a joke" message to the LLM:</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715404566968/81bd8c02-8e78-4891-80c3-9a83f23788b3.png" alt class="image--center mx-auto" /></p>
<ul>
<li>At the top-left of the UI, I click the left-pointing arrow to return to the Main Menu.</li>
</ul>
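<p>The chat panel is not the only way to talk to this flow: Flowise also exposes every saved chatflow over a REST endpoint, so the <code>Local Chatbot</code> can be queried from code. The sketch below assumes a default local install on port 3000 with no API key configured; the chatflow ID is a placeholder that must be replaced with the real ID shown in the canvas URL:</p>
<pre><code class="lang-python">import requests

# Placeholder: replace with the real ID of the "Local Chatbot"
# chatflow, visible in the Flowise canvas URL.
CHATFLOW_ID = "your-chatflow-id"

API_URL = f"http://localhost:3000/api/v1/prediction/{CHATFLOW_ID}"

def ask(question: str) -> str:
    # The prediction endpoint accepts a JSON body with a "question" key
    # and returns the model's reply in the "text" field.
    response = requests.post(API_URL, json={"question": question}, timeout=120)
    response.raise_for_status()
    return response.json().get("text", "")

print(ask("Tell me a joke"))
</code></pre>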
<hr />
<h1 id="heading-example-2-local-rag-chatbot">Example 2: Local RAG Chatbot.</h1>
<ul>
<li><p>From the Main Menu (on the left of the UI), I click <code>Chatflows</code>.</p>
</li>
<li><p>At the top-right of the UI, I click the blue <code>+ Add New</code> button.</p>
</li>
<li><p>In the <code>Untitled chatflow</code> canvas, I click the <code>Save</code> button (which looks like a floppy disk).</p>
</li>
<li><p>I name the canvas <code>Local RAG Chatbot</code>.</p>
</li>
</ul>
<blockquote>
<p>NOTE: Remember, the <code>Add Nodes</code> button on the left is round, blue, and has a (+) symbol.</p>
</blockquote>
<ul>
<li><p>I drag the <code>Add Nodes &gt; Chains &gt; Conversational Retrieval QA Chain</code> node to the canvas.</p>
</li>
<li><p>I drag the <code>Add Nodes &gt; Chat Models &gt; ChatOllama</code> node to the canvas.</p>
</li>
<li><p>I connect the <code>ChatOllama</code> node Output to the <code>Conversational Retrieval QA Chain</code> node <code>Chat Model</code> Input.</p>
</li>
<li><p>In the <code>ChatOllama</code> node, I define the following settings:</p>
</li>
</ul>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Setting</td><td>Value</td></tr>
</thead>
<tbody>
<tr>
<td>Base URL *</td><td>http://localhost:11434</td></tr>
<tr>
<td>Model Name *</td><td>codellama:13b-instruct</td></tr>
<tr>
<td>Temperature</td><td>0.4</td></tr>
</tbody>
</table>
</div><ul>
<li><p>I drag the <code>Add Nodes &gt; Vector Stores &gt; In-Memory Vector Store</code> node to the canvas.</p>
</li>
<li><p>I connect the <code>In-Memory Vector Store</code> node Output to the <code>Conversational Retrieval QA Chain</code> node <code>Vector Store Retriever</code> Input.</p>
</li>
<li><p>I drag the <code>Add Nodes &gt; Embeddings &gt; Ollama Embeddings</code> node to the canvas.</p>
</li>
<li><p>I connect the <code>Ollama Embeddings</code> node Output to the <code>In-Memory Vector Store</code> node <code>Embeddings</code> Input.</p>
</li>
<li><p>In the <code>Ollama Embeddings</code> node, I define the following settings:</p>
</li>
</ul>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Setting</td><td>Value</td></tr>
</thead>
<tbody>
<tr>
<td>Base URL *</td><td>http://localhost:11434</td></tr>
<tr>
<td>Model Name *</td><td>codellama:13b-instruct</td></tr>
<tr>
<td>Additional Parameters</td><td>Number of GPU: 1; Use MMap: On</td></tr>
</tbody>
</table>
</div><ul>
<li><p>I drag the <code>Add Nodes &gt; Document Loaders &gt; Cheerio Web Scraper</code> node to the canvas.</p>
</li>
<li><p>I connect the <code>Cheerio Web Scraper</code> Output to the <code>In-Memory Vector Store</code> node <code>Document</code> Input.</p>
</li>
<li><p>In the <code>Cheerio Web Scraper</code> node, I define the following settings:</p>
</li>
</ul>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Setting</td><td>Value</td></tr>
</thead>
<tbody>
<tr>
<td>URL *</td><td>https://www.langchain.com/langsmith</td></tr>
</tbody>
</table>
</div><ul>
<li><p>I save the changes to the <code>Local RAG Chatbot</code>.</p>
</li>
<li><p>I insert data from the <code>Cheerio Web Scraper</code> into the <code>In-Memory Vector Store</code> by clicking the green DB button at the top-right of the screen.</p>
</li>
<li><p>Once the upsert completes, I can chat with the results, e.g. "What is LangSmith?"</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715472743496/7720bce9-3874-4308-8ad4-6929d2c52cca.png" alt class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-the-results">The Results.</h1>
<p>Using Flowise with local LLMs offers a robust and flexible environment for developing advanced, AI-powered applications. By integrating tools like Ollama and various Flowise components, I can create efficient, cost-effective, local chatbots and AI agents capable of performing a wide range of tasks. This setup enhances data privacy and operational security, and allows for extensive customization to meet specific needs. Whether for business processes, customer service, or personal projects, the ability to build and manage local AI models with Flowise opens up new possibilities for innovation and efficiency in AI application development.</p>
<hr />
<h1 id="heading-in-conclusion">In Conclusion.</h1>
<p>I discovered how Flowise can transform my AI development process and revolutionize my AI workflows with local LLMs.</p>
<p>In today's tech-driven world, efficiency and customization in AI are crucial. That's why I'm excited to share my journey using Flowise with local LLMs, a robust platform that simplifies the creation of AI-powered applications.</p>
<p>With Flowise, I've built powerful local chatbots and AI agents that are not only cost-effective but also prioritize data privacy and operational security. This setup is perfect for businesses and individual developers looking to innovate, while also maintaining control over their data.</p>
<p>From integrating tools like Ollama to utilizing components like Chatflows and Memory Buffers, Flowise has allowed me to seamlessly transition from testing to production, ensuring my AI solutions are both dynamic and scalable.</p>
<p>Have you considered using local LLMs for your AI projects? What has been your biggest challenge in AI development?</p>
<hr />
<p>Until next time: Be safe, be kind, be awesome.</p>
<hr />
<p>#Flowise #AI #AIdevelopment #AIWorkflow #AIApplications #LocalLLMs #Chatbots #MachineLearning #TechInnovation #DataPrivacy #OperationalSecurity</p>
]]></content:encoded></item></channel></rss>