I made it to Bellingham this spring for LinuxFest Northwest, and I get why this event has been earning its reputation for 27 years.
This is not a conference where corporations show up to scan badges and hand out tote bags. No lead capture, no forced networking. Just a few hundred people who actually care about Linux, gathered in a walkable waterfront city on a genuinely beautiful spring weekend. The crowd skews toward students, hobbyists, longtime contributors, and the kind of sysadmins who have strong opinions about their chosen terminal emulator. The conversations are real, and the community is tight.
I was there representing CIQ and the Rocky Linux project alongside R. Leigh Hennig, one of Rocky’s cofounders, and a few others from the CIQ team. Having people who are genuinely part of the project made a difference on the floor. We talked migrations, automation, home lab setups, and what people are actually running in production. Good questions from people who know their stuff.
The social side matched the rest of it. Friday night at Beach Cat Brewery brought together folks from RESF, Microsoft, Fedora, CentOS, and a mix of attendees and volunteers. Saturday was dinner at Brandywine Kitchen and drinks at The Den. The kind of cross-community connection that does not happen on Slack.
I gave a talk called “Escaping the End-of-Life Nightmare: Lessons from the Linux Graveyard,” aimed at anyone who has ever inherited a server running something two major versions past EOL. It drew around 25 people and led to some good conversations afterward. When the recording goes up I will share it here.
If you have never been to LFNW, put it on your list. Bellingham is a great place to spend a spring weekend, and the community there is worth showing up for.
Bootc in Production: What It Actually Looks Like | Fedora Podcast Ep. 054
Episode 54 of the Fedora Podcast is out, and this one digs into something I am genuinely curious about for my own homelab: what does bootc look like when someone is actually running it day to day, not just kicking the tires?
https://youtu.be/6_ZzEIvNySc
James Harmison joined me to walk through how he uses bootc across a pretty wild range of contexts, from his personal laptops and a stripped-down SteamOS-style couch gaming rig, to swapping kernels in OpenShift for AMD GPU support at work. His setup replaces the traditional dotfiles repo entirely, bakes kernel modules directly into the image so DKMS and akmod are completely gone, and runs CI on his homelab to rebuild the image on every commit. It is the kind of thing that sounds overcomplicated until he explains it, and then it sounds like the only sane way to manage a system.
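For readers who have not seen a bootc-style setup, the idea is easiest to grasp as a Containerfile. The sketch below is illustrative only, not James's actual configuration: the base image tag, package names, and dotfiles path are assumptions, but the shape is what a bootc-managed system looks like.

```dockerfile
# Illustrative sketch only -- not James's actual configuration.
# With bootc, the running host is defined by a container image;
# the machine updates itself by pulling a new build of this image.
FROM quay.io/fedora/fedora-bootc:41

# Packages (and kernel modules) are baked into the image at build
# time, so there is no DKMS/akmod rebuild step on the host itself.
RUN dnf install -y tmux htop && dnf clean all

# Configuration lives in the image rather than a dotfiles repo.
COPY dotfiles/ /etc/skel/
```

Rebuilding this image in CI on every commit, as James does, is what turns the image into the single source of truth for the system.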
We also spent time on Chunkah, an upstream project working on smarter container image chunking for bootc. James was one of the first people to stress test it on a large image and found some real problems with the packing algorithm. He helped kick off the conversations that are now shaping the next release. That is the kind of quiet upstream contribution that does not always get attention but genuinely moves the project forward.
If you are curious about bootc and want to hear from someone who has been living in it for a couple of years, this episode is a solid starting point.
This is the kind of conversation that fits right in the middle of my broader interest in where Linux infrastructure is heading, both for homelabs and production environments.
GPU-Ready in Minutes: Running AI on Azure with RLC Pro AI
Getting from the Azure Marketplace to actual inference used to mean a day of driver archaeology. This webinar with Brian Dawson, Damon Knight, and Hugo from Microsoft Azure’s core engineering team is about what happens when that problem is mostly solved.
We started where most of these conversations start, which is the pain. Damon has done this setup process more times than anyone should have to, across Ubuntu, Red Hat, SUSE, and Rocky, and his description of what a typical deployment looks like before RLC Pro AI is the kind of thing that gives sysadmins flashbacks. Figuring out which drivers go with which CUDA version, whether Torch was compiled for the GPU or the CPU, whether you need the CUDA toolkit on top of CUDA, and whether that first reboot is the last reboot. Brian made the point that even his conservative estimate of time saved turned out to be underselling the problem after he talked to analysts who work in this space full time.
The demos cut through a lot of that. Damon showed a fresh RLC Pro AI instance in Azure spinning up Jupyter Notebook on an H100, validating GPU access with Nvidia SMI, and running tensor math in Python, with most of the time going to Docker setup rather than anything AI-specific. The second demo was more involved: a full RAG chatbot using Microsoft’s Phi-4 model and the Open Platform for Enterprise AI reference implementation, running on an RTX preview host. Upload a PDF, ask it questions, get answers grounded in your document. Start to finish, about ten minutes.
Hugo brought a useful perspective from the Azure side. Because Azure VM types are homogeneous across regions, partners can optimize once and trust that the results hold everywhere. That consistency matters a lot when you are trying to build something repeatable. He also gave a preview of what is coming on the hardware side, including the RTX 6000 Pro hitting general availability soon and ongoing work on GB200 and GB300 deployments.
If you have been waiting for a reason to actually try running your own AI instead of just reading about it, this is a pretty good starting point. Subscribe to The IT Guy Show on YouTube and follow along at itguyeric.com for more.
I wrote this deployment guide for CIQ, walking through how to get RLC Pro up and running on Google Cloud Marketplace, from first click to a live VM.
Google was one of the first hyperscalers to back the Rocky Linux project, and CIQ has partnered with them to deliver optimized Rocky Linux images built specifically for Google Compute Engine. RLC Pro is the enterprise layer on top of that: Long-Term Support for specific minor versions, FIPS 140-3 validated packages, and an actual vendor to call when something breaks.
The post covers what you get with a subscription, why LTS minor version pinning matters for compliance frameworks like FedRAMP and CMMC, and how to get from the Marketplace listing to a running VM in a few steps.
I wrote this guide for CIQ covering how to get RLC Pro running on AWS, from the Marketplace listing to a live EC2 instance with LTS and FIPS available.
The post makes the case that most teams standing up Enterprise Linux on AWS spend too much time configuring things that should have shipped with the image. RLC Pro skips that. The AMI comes pre-configured with CIQ repository access, and enabling LTS pinning or FIPS 140-3 validated packages is a single step after launch.
I covered the full picture: what Long-Term Support actually means for compliance and QA cycles, why FIPS 140-3 matters for FedRAMP and CMMC-scoped workloads, and how Enterprise Linux binary compatibility means your existing Ansible playbooks and automation carry over without changes.
This is part of a series I produced at CIQ on deploying RLC Pro across major cloud providers, making enterprise Linux accessible wherever your infrastructure lives.
Flock to Fedora 2026 is heading back to Prague this June, and on episode 53 of the Fedora Podcast, Noah and I sat down with Justin Wheeler and Emma from the Fedora design team to talk through everything contributors and community members need to know.
For anyone who hasn’t been, Flock is the Fedora Project’s annual contributor conference — not a sales event, not a product launch, just the place where a global community of mostly volunteers gets together once a year to do real work, set direction, and remind themselves why they got into open source in the first place. Justin, who serves as Fedora Community Architect at Red Hat, described it well: it’s where you go to fuel up on the energy that carries you through the rest of the year. That tracks with my own experience. Some of the best conversations in open source happen at a table at midnight over bad food and good drinks.
The decision to return to Prague for a second consecutive year was a deliberate one. Travel anxiety into the US is real for a significant portion of the global Fedora community right now, and the Prague venue worked exceptionally well last year — accessible by train and plane, flexible space, and existing relationships with the hotel. Justin was transparent that this isn’t a new permanent model, just a practical call that let the team focus on improving the conference itself rather than rebuilding logistics from scratch in a new city.
The design theme for Flock 2026 is Alchemy, and Emma walked us through the research behind it. Prague’s history with alchemy traces back to Emperor Rudolph II, who brought Europe’s top alchemists to the city during his reign in the 1500s in pursuit of the philosopher’s stone. The design team ran a community vote on potential themes and alchemy won, which gave them a rich visual direction — medieval manuscripts, heraldic imagery, Fedora’s four foundations rendered as elemental colors radiating from a stone. Fedora’s Flock mascot Kulora returns dressed as an alchemist this year, searching not for the philosopher’s stone but for Fedora itself. Emma is also reworking the design process to build a shared asset library so newer contributors can pick up design tickets without starting from a blank canvas.
As for the schedule: the conference runs June 14th through 16th, Sunday through Tuesday. Sunday is “Day Zero” — an offline workshop day with no AV and no plenary sessions, just hands-on working time for teams and SIGs to actually get things done face to face. Day One and Day Two follow a mirror structure: one opens with a single plenary track and then breaks into three simultaneous recorded breakouts after lunch; the other reverses the order. Both days will be live streamed so remote attendees can follow along. Registration is free, but you need to sign up in advance to get food, swag, and a conference t-shirt — and based on what Emma described about the alchemy branding, you’re going to want the t-shirt. Financial aid is also available for those who need it.
You can register now at fedoraproject.org/flock2026 and hang out in the #flock:fedoraproject.org room on Matrix if you have questions. Hope to see some of you in Prague.
I wrote this deployment guide for CIQ covering how to get RLC Pro running on Microsoft Azure, from the Marketplace listing to a production-ready Enterprise Linux environment.
The post opens with a question worth asking: how much time does your team spend configuring Enterprise Linux after launch? LTS repos, FIPS packages, support credentials. None of that is the work you spun up the instance to do. RLC Pro on Azure addresses that directly. The image ships pre-configured with CIQ repository access, and enabling Long-Term Support or FIPS 140-3 validated packages is one step after connecting your subscription.
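For context on what that one step looks like in practice: on Enterprise Linux systems generally, FIPS mode is toggled with the standard fips-mode-setup tool. The commands below are the generic EL workflow, shown as an illustration; the CIQ subscription-connection step itself is separate and not shown here.

```
# Generic Enterprise Linux FIPS workflow (illustrative; the
# CIQ subscription step is separate and not shown).
sudo fips-mode-setup --enable   # switch the crypto policy to FIPS
sudo reboot                     # FIPS mode takes effect at boot

# After the reboot, verify:
fips-mode-setup --check             # reports FIPS mode status
cat /proc/sys/crypto/fips_enabled   # 1 when FIPS mode is active
```

The point of the pre-configured image is that everything upstream of this toggle, validated packages included, is already in place.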
I covered what each enterprise capability actually means in practice: LTS minor version pinning for teams that cannot rebuild and revalidate every time upstream ships a new release, FIPS 140-3 for regulated workloads in government, defense, healthcare, and financial services, and full Enterprise Linux binary compatibility for teams migrating existing automation and applications.
This is the third post in a series I produced at CIQ on deploying RLC Pro across major cloud providers, with AWS and Google Cloud covered in companion guides.
From Fresh Install to AI Inference in Under 4 Minutes
Getting a GPU box ready for AI workloads is way harder than it should be, and we proved that live on the CIQ Webinar Series on April 2nd.
I brought in Brian Dawson from CIQ product management, Damon Knight (CIQ’s resident AI nerd and automation engineer), and Zach from AI Insight Solutions for an honest conversation about where most organizations actually are when it comes to GPU infrastructure. The short answer: a lot of people started on cloud, found it expensive, bought hardware, and are now figuring out that running AI on prem is a whole different problem.
The demo said everything. We ran a fresh Ubuntu setup through the full stack, including Nvidia drivers, CUDA, the CUDA toolkit, cuDNN libraries, and PyTorch, with Damon copy-pasting commands he spent months refining. Time to first inference: around 13 and a half minutes, and roughly 10 of those were just prerequisites. Compare that to RLC Pro AI, which ships with the validated stack already baked in. Same hardware, same demo code, first tokens in about 3 minutes and 30 seconds.
What makes that difference real at scale is validation. It is not just that the stack installs faster. It is that CIQ actually tested the dependency combinations, recompiled PyTorch with the right flags, and confirmed the GPU is doing the work instead of silently falling back to CPU. Damon’s point about checking Nvidia SMI and seeing 0% GPU utilization hit close to home for anyone who has been there.
If you are building or managing AI infrastructure, this one is worth watching.
Subscribe to The IT Guy Show on YouTube and follow along at itguyeric.com for more Linux, open source, and infrastructure content.
CIQ Portal Is Live: Access and Deploy CIQ Products on Your Own Terms
I wrote the launch post announcing the CIQ portal going live at portal.ciq.com, covering what it is, who it’s for, and how to go from registration to your first download in under a minute.
The portal is the central hub for everything CIQ: product downloads, license keys, documentation, and team permission management across the full portfolio. That includes RLC Pro, RLC Pro Hardened, RLC Pro AI, RLC+, CIQ Bridge, Fuzzball, Ascender Pro, and Warewulf Pro. The post walks through the catalog structure, the Solution Stacks view for teams who aren’t sure where to start, and a step-by-step example deploying RLC Pro AI for a sovereign AI workload in a private data center.
One thing I wanted to get across was how genuinely fast the self-service flow is. Free tier, 30-day trial, or full enterprise deployment, you can get moving without talking to anyone first.
Product accessibility is something I care about, and this post was a good opportunity to show how a well-designed portal can lower the barrier to evaluating enterprise infrastructure software.
How to Migrate from RHEL to RLC Pro Without Re-Architecting
I wrote this guide for CIQ covering how to convert a running RHEL system to RLC Pro, without rebuilding servers or re-certifying your application stack.
The core of the post is Enterprise Linux binary compatibility. Because Rocky Linux builds from the same source RPMs as RHEL, the kernel ABI stays stable, shared libraries match, and your applications, Ansible playbooks, and automation carry over without modification. What actually changes is repository URLs, package signatures, and OS branding. That’s it. The migrate2rlc script handles the conversion on a live system, and the post walks through the full process: environment assessment, collecting CIQ portal credentials, running the migration, and validating the result.
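At a command level, the flow described above can be sketched roughly as follows. This is an illustration, not the documented interface: the script's flags and download location are assumptions, and only the script name migrate2rlc comes from the guide itself.

```
# Illustrative sketch of the migration flow; script options and
# location are assumptions, not the documented interface.
cat /etc/os-release     # 1. confirm what the source system is running

sudo ./migrate2rlc      # 2. run the conversion on the live system
                        #    (after collecting CIQ portal credentials)

# 3. Validate the result:
cat /etc/os-release     # should now identify the system as RLC Pro
sudo dnf repolist       # CIQ repositories should be active
```

Because the binaries are compatible, the applications running on the box never notice the switch; only the package sources underneath them change.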
I also covered what you get on the other side. LTS version pinning, FIPS 140-3 validated packages on the .2/.6/.10 releases, commercial support tiers, and IP indemnification. The migration is the starting point, not the finish line.
Migration content sits at the intersection of technical depth and business case, which is the kind of writing I enjoy most and where my sysadmin background actually earns its keep.
I co-wrote this recap with Michael Young and R. Leigh Hennig covering CIQ’s presence at SCaLE 23x in Pasadena, and I was there in person as a speaker.
The post covers the full picture: 300+ badge scans at the CIQ booth, conversations with folks from Microsoft, Meta, Fedora, Red Hat, AlmaLinux, RESF, and more, a sponsored hardening workshop, and Community Game Night where Michael and I got absolutely dismantled at Uno No Mercy. The Asterisk community connection was an interesting thread too, with real follow-up conversations about adding Rocky Linux as a supported distribution.
My contribution to the program was a talk called “From Bash to Burnout: Staying Sane in a 24/7 Tech World.” It was a personal one. Behind every uptime badge is a tired sysadmin, and I wanted to have an honest conversation about protecting your time and energy without losing your love for the work. It drew about 20 to 30 attendees and sparked some good discussion afterward.
Burnout in tech is something I have thought about for a long time, and getting to talk about it in front of a room full of sysadmins and infrastructure people at SCaLE felt like exactly the right venue.
What Enterprise Linux Support Actually Means (And Why It Matters)
Rocky Linux is everywhere, and for good reason. But community is not always enough, and this webinar is where Brady Dibble and I got into exactly what changes when you need a vendor behind your Linux stack.
Brady is CIQ’s director of product management and honestly one of the clearest thinkers I have talked to on the topic of enterprise Linux. We started at the 40,000-foot view: enterprises are not planning infrastructure in years, they are planning in decades. And that long-term stability calculus gets complicated fast when AI is forcing you to chase the latest kernel features and GPU support at the same time.
A big chunk of our conversation was about the difference between RLC Plus and RLC Pro, and when each one makes sense. RLC Plus is free, vendor-validated, and great for homelabs, startups, and anyone who needs more than community but is not running regulated production workloads. RLC Pro is where you get LTS, FIPS 140-3 compliant modules, indemnification, and actual support when things go sideways. Brady also made a point that stuck with me: you do not have to choose upfront. You can start on RLC Plus and move a node to Pro with basically a single command.
We also got into what indemnification actually means in practice, how FIPS compliance differs from FIPS certification, and why CIQ intentionally separated itself from the Rocky Linux project governance. That last one surprises a lot of people, and it came up at SCaLE just before we recorded this.
If you are running Linux in production or evaluating your options, this one is worth the watch. Subscribe to The IT Guy Show on YouTube and follow along at itguyeric.com for more.
This one was personal. I gave this talk at SCaLE 23x in Pasadena as part of Open Source Career Day, and I will be honest, I almost did not write it because it meant putting some very uncomfortable things on a slide.
In 2025 I lost a job I had held for five and a half years in a seven-minute call. Over the next 96 days I worked overnight shifts at a gas station, interviewed on no sleep during what should have been my sleep hours, and spent a lot of time figuring out who I was when the job that had become my identity was gone. I also said yes to teaching Linux administration at Johnson County Community College, which I almost turned down, and eventually landed at CIQ. Both of those came through my network, not job boards.
The talk covers what burnout actually looks like before you realize you are in it, the patterns that get technical people there faster than most, and some practical steps I used to stabilize and rebuild. I talk about the hero trap, context switching, the identity trap, and what a blameless postmortem on my own situation taught me. There is also a question I started asking people in my life that has saved me more arguments than I can count, and I share it in the talk because it is genuinely that useful.
This is not a polished keynote. The slide deck had some technical issues live, I was processing some heavy personal news the morning of the talk, and it shows. But I think that is part of why it landed the way it did.
I gave this talk at SCaLE 23x in Pasadena as part of Open Source Career Day, and it is probably the most personal thing I have ever put on a stage.
2025 was a hard year. I lost a job I had tied my entire identity to, spent 96 days working overnight shifts at a gas station, and went through some personal upheaval I was not prepared for. I did not plan to turn any of that into a conference talk. But when I was asked to speak, it felt dishonest to stand up in front of a room full of sysadmins and IT folks and pretend I had it all figured out.
The talk covers what burnout actually looks like before you realize it is happening, the patterns that lead there, the signals I missed in my own life, and some practical things that genuinely helped. Not productivity hacks. Real stuff, like building a daily rhythm when everything falls apart, maintaining a few relationships with no agenda, and learning to separate your identity from your employer before a reorg does it for you.
I also talk about what I did right after the layoff and what I would do differently. Spoiler: I should have taken time to grieve before immediately hunting for the next thing.
If any of this sounds familiar, the recording is worth 40 minutes of your time.
Burnout is not a personal failing. It is a warning sign. I needed someone to say that to me in 2025, so I figured I would say it to a room full of people instead.
Proactive Linux Security: How RLC Hardened Goes Beyond Patching
Reactive security is table stakes at this point, and it is not keeping up. I sat down with Nathan Blackham, Brady Dibble, and Sultan Alof from CIQ to talk about what it actually looks like to build defense into the operating system from day one.
The conversation started with something that hits close to home for anyone managing Linux at scale: CVE volume has exploded. The Linux kernel became its own CVE numbering authority a few years back, which means every commit gets evaluated for security relevance. That is good in theory, but in practice it means patching has become a near full-time job, and mission usually wins when you have to choose between a reboot window and keeping production up. Brady put it plainly: compliant and secure are not the same thing, and the gap between them is where things go wrong.
That is the whole premise behind RLC Hardened. It ships with a set of code-level changes designed to close off entire attack vectors before a CVE even exists for them. That includes a hardened glibc build, hardened malloc, tightened OpenSSH that drops the XZ library linkage after startup, stronger password hashing via yescrypt, and a few other targeted modifications. The gold image comes out at 95% STIG compliance on day one, which is an actual measured number, not marketing copy.
The standout piece is LKRG, the Linux Kernel Runtime Guard. Sultan walked through how it works and it is genuinely different from anything else in this space. Rather than scanning for known exploit signatures, it watches for the end goal of an exploit, things like a process claiming root credentials it never earned through legitimate channels. The demo showed it killing an exploit attempt cold, and the performance overhead is around 2.5%, which is low enough that most workloads will not notice it. It also ships signed for secure boot, which matters a lot in regulated environments.
If you are running Linux in production and security compliance is part of your world, this one is worth your time. Subscribe to The IT Guy Show on YouTube and follow along at itguyeric.com for more.
Sovereign AI Infrastructure: Training, Inference, and Exploration in One Workflow
This was my first CIQ webinar and I walked away genuinely excited, which is not something I say lightly. Dave Godlove and Jonathan Sparks showed off Fuzzball’s new service endpoints feature and what it unlocks for teams that want to run their own AI without handing their data to someone else.
The core problem Jonathan laid out is something a lot of teams are quietly dealing with. Training, fine-tuning, building a RAG index, and running inference have traditionally been treated as completely separate workflows, often on separate tools. Fuzzball has always been a batch processing engine for HPC workloads, but service endpoints change the picture. Now a single workflow can include batch jobs that ingest and vectorize your documentation alongside a persistent AI service you can actually talk to, all managed through the same interface.
The demo Dave walked through made this concrete. Wolfgang (a colleague who built the workflow) set up a local AI stack inside Fuzzball that pulls documentation from a private GitHub repo, builds a vector database using LocalRecall, and serves it through a RAG-enabled model you can query directly in the browser. No data leaves your environment. No external API calls. The model knew about Fuzzball because it had been trained on Fuzzball’s own docs, running entirely on CIQ’s infrastructure. Dave also showed off a virtual desktop service and a Jupyter notebook environment running through the same setup, both accessible with a single click from the web interface.
What got me was the scope controls. You can restrict a service to just yourself, share it with a team, or open it to your whole organization, and that applies to the AI model the same as anything else. The use cases kept expanding as we talked: internal documentation bots, sovereign coding assistants, VDI for remote research teams, agentic workflows connected to Slack or GitHub. All of it on hardware you control.
If you are curious what self-hosted AI actually looks like in practice, this one is worth your time. Subscribe to The IT Guy Show on YouTube and follow along at itguyeric.com.
TuxCare Enterprise Support Vertical Explainer Series
These three short videos were part of a vertical marketing campaign I produced at TuxCare, each one aimed at a different regulated industry dealing with the same core problem: Linux infrastructure that has to stay secure, compliant, and online, with less budget and fewer people than the job actually requires.
I wrote the scripts and appeared on camera for all three. The goal was to translate TuxCare’s enterprise support value into language that resonated with each audience specifically. Fintech teams thinking about PCI-DSS and SOX. Public sector teams navigating FedRAMP, FISMA, and FIPS 140-3 mandates. Healthcare IT teams carrying HIPAA obligations while managing legacy systems that simply cannot go down.
Same product, same core message, three completely different buying contexts. Getting that right without just swapping out the compliance acronyms was the actual challenge.
Vertical messaging is a skill I enjoy. Understanding what keeps a CISO at a hospital up at night versus what keeps a compliance officer at a federal agency up at night, and then writing to that specifically, is where technical background and marketing instinct actually come together.
Every six months or so, I found myself in the same spot. A new release was out, the features were solid, the engineering work was impressive… and yet the same questions kept coming back from sales, customers, and even internal teams.
“What actually changed?” “What should I care about?” “How do I explain this without reading release notes out loud?”
That’s the gap the “What’s New” decks were meant to close. Not as a one-off presentation, but as a repeatable way to tell the product story in a way that made sense to real humans — especially the ones who didn’t live and breathe the roadmap every day.
The Challenge
The problem wasn’t that the releases were bad. Quite the opposite. The problem was volume.
Enterprise software ships on a predictable cadence. New features, enhancements, deprecations, tech previews — all of it lands on a schedule whether anyone is ready or not. By the time a release goes GA, teams are already juggling conferences, roadmap conversations, internal launches, and whatever fire happens to be burning that week.
What I kept seeing was this:
Release notes were thorough, but overwhelming
Sales teams wanted a clear story, not a changelog
Customers cared about impact, not every individual bullet point
And on top of that, the audience was never just one type of person. A single update had to make sense to:
A practitioner watching live
A sales engineer skimming slides before a customer call
A customer catching the replay two weeks later
Without a consistent way to frame what mattered most, every release risked becoming noise — even when the work underneath it was genuinely exciting.
The Approach
After a few of these updates, it became obvious that the problem wasn’t the content — it was the format.
Every product team I worked with had the same raw ingredients: feature lists, engineering notes, roadmap context, and a rough idea of who the audience was. What kept breaking down was the handoff between knowing what shipped and explaining why anyone should care.
So instead of treating each update as a brand-new presentation, I started thinking in terms of a system.
The goal was simple:
Start with why this matters, not just what shipped
Curate a small number of changes that actually moved the needle
Design once, then reuse across multiple audiences and channels
That meant forcing some discipline. Not every feature made the cut. Not every slide was technical. And every deck had to work in more than one context — live presentations, recorded webinars, internal enablement, and on-demand viewing.
I leaned on Pragmatic Marketing principles here, but quietly. Personas shaped what made it into the deck. Market problems dictated framing. The framework stayed in the background while the story stayed front and center.
Over time, this approach turned into a repeatable model I could apply regardless of product, release size, or company. The examples changed. The structure didn’t.
Key Deliverables
Once the structure was in place, the actual outputs became much easier — and more consistent — to produce. Instead of reinventing the wheel every release, each update followed the same basic anatomy, even as the content changed.
At a high level, the system produced:
A Modular “What’s New” Deck
Each deck followed a predictable flow:
Context first: what kind of release this is and who it’s for
Top themes: a small number of changes that actually matter to buyers
Selective depth: enough detail to be credible without overwhelming
What’s next: timelines, previews, or things to keep an eye on
This made the deck usable whether someone watched live, skimmed slides before a call, or jumped to a specific section on replay.
Live and Recorded Presentations
The same deck was designed to work on-camera. That meant:
Slides that supported narration instead of replacing it
Clear transitions for live discussion and Q&A
Natural breakpoints for chapter markers and replays
In practice, this allowed a single narrative to power live webinars, recorded updates, and shorter clips without rewriting the story each time.
Reusable Enablement Assets
Because the structure stayed consistent, pieces of the deck could be lifted and reused:
Sales teams pulled slides directly into customer conversations
Partners used trimmed versions for their own briefings
Marketers repurposed sections into blog posts, videos, or follow-up content
One concrete example of this approach in action was a recurring “What’s New” update for Red Hat Enterprise Linux, where the same core deck supported a live YouTube presentation, internal enablement, sales presentations, and on-demand viewing — all without maintaining separate versions.
The deliverables mattered, but the real value was consistency. Once teams knew what to expect, the updates became easier to consume, easier to share, and easier to trust.
Impact
The most noticeable change wasn’t a single metric — it was how people interacted with the updates.
Before this approach, release content tended to feel disposable. Watch it live, forget it, move on. Once the structure stabilized, the updates started behaving more like a reference point than an event.
Here’s what shifted over time:
Clearer conversations: Sales and field teams stopped asking what they should focus on and started asking deeper follow‑up questions.
Better reuse: The same deck showed up in customer calls, internal trainings, and partner briefings without needing to be rewritten.
Higher engagement: Live sessions held attention longer, and recorded views stayed relevant well past the initial release window.
More confidence: Stakeholders trusted the updates because they knew what they were getting — context, prioritization, and honesty about what mattered.
In at least one recurring release series, this approach contributed to sustained growth in live attendance and replay views, while reducing the amount of one‑off enablement work needed each cycle.
More importantly, it changed expectations. Product updates stopped feeling like a fire drill and started feeling like a known rhythm — something teams could plan around instead of react to.
Conclusion & CTA
What started as a way to survive release cycles eventually became one of the most reliable and successful product marketing tools in my toolkit.
The lesson wasn’t about slides. It was about respect — for the audience’s time, for the complexity of the product, and for the reality that most people don’t live inside a roadmap. When updates are framed with context, intent, and restraint, people don’t just consume them. They trust them.
This same model has worked across different products, different release cadences, and different audiences because the fundamentals don’t change:
Start with why
Curate instead of catalog
Design for reuse
Treat enablement as a system, not a one-off
If you’re building product updates, enablement content, or launch narratives and finding that they’re not landing the way you expect, it’s usually one of these fundamentals that’s missing.
If you’d like to talk about how this approach could apply to your product, your team, or your release motion, I’m always happy to compare notes.
Breqwatr Customer Story: Ending Scanner Overload with TuxCare Radar
I produced this customer story for TuxCare, including the written case study and the short video summary below. Breqwatr builds OpenStack clouds for industries where infrastructure genuinely cannot fail: freight and logistics, government deployments, financial services. Their vulnerability management problem was not a lack of scanning. It was too much of it.
Conventional scanners were returning thousands of flagged issues, many without enough context to act on. Engineers were spending hours parsing reports instead of running infrastructure. And with SOC 2 Type 2 completed and ISO 27001 in progress, audit conversations were getting harder to manage, not easier.
TuxCare Radar changed that. Deployed in minutes via Ansible with no performance impact, Radar gave the Breqwatr team immediate fleet-wide visibility into what actually needed attention. As CTO Michael Knox put it, install the agent and away you go. No dedicated SecOps team required, no weeks of configuration, just a clean dashboard showing patch status across the entire fleet. Audit conversations shifted from defensive to confident, because the team could now show not just what vulnerabilities existed but which ones actually mattered in context.
Customer stories are some of my favorite content to produce. Getting the technical detail right while keeping the narrative accessible to a compliance officer and a CTO at the same time is a challenge worth solving.
Session 0 in The Last Parsec: Savage Worlds Solo RPG
When I decided to play a solo Savage Worlds campaign, I wasn’t just adding another bullet (pun intended) to my TTRPG résumé—I was signing up for a one-man voyage across the known (and unknown) cosmos.
As a player, I took on this solo experience for two main reasons:
SWADE Rulebook
To internalize the full Savage Worlds ruleset—Edges, Bennies, dramatic tasks, social conflict—without relying on another player or GM to drive the pace.
To experiment with different tools, play around with tech, and put my homelab to use hosting Foundry, running AI, and exploring integrations like Savaged.us, the SWADE core rules, the Science Fiction Companion, and The Last Parsec one-sheets.
Enter Savage Worlds’ The Last Parsec: a sci-fi setting defined by exploration, corporate entanglements, and frontier justice across hundreds of star systems. It’s open, it’s modular, and it fits perfectly with the vibe I wanted: Han Solo colliding with Tony Stark, wrapped in a layer of snark and high-stakes space adventure.
What Is the Savage Worlds RPG?
Savage Worlds is a fast, flexible RPG system that emphasizes streamlined play without sacrificing depth. It’s built to support multiple genres—from swords-and-sorcery to high-flying space adventure—and focuses on keeping the action moving without bogging players down in overly complex rules. Instead of strict class systems, Savage Worlds uses attributes, skills, Edges, and Hindrances to shape your character, offering a lot of freedom for roleplay and customization.
How Savage Worlds Compares to D&D
Compared to Dungeons & Dragons, combat in Savage Worlds tends to be quicker, more dynamic, and less bogged down in lengthy turn cycles. The system’s streamlined mechanics keep the action moving, which makes every fight feel cinematic rather than procedural. Character creation is also far less constrained by predefined classes—you build your character’s abilities from the ground up using attributes, skills, Edges, and Hindrances, which means you can create exactly the kind of hero (or rogue) you want without forcing them into a rigid mold.
That flexibility was a big draw for me when deciding to launch this Savage Worlds solo RPG set in The Last Parsec. If I’m going to invest my time in a game, I want the freedom to experiment and the ability to pivot the story based on the decisions I make. Video games can be fun and immersive, but even the best RPGs tend to have fixed narratives and limited dialogue trees. In a tabletop RPG like Savage Worlds, especially in a solo campaign, there’s no invisible wall stopping me from trying something unexpected—the “engine” adapts to me, not the other way around. It’s a more collaborative, story‑driven experience where the rules bend to serve the adventure instead of forcing the adventure to fit the rules.
The Last Parsec – A Savage Worlds Sci‑Fi Setting
The Last Parsec Primer
The Last Parsec builds directly on the foundation of Savage Worlds Adventure Edition (SWADE)—the current core ruleset—and the Science Fiction Companion, which expands SWADE with space travel rules, ship design, advanced gear, and alien species. Rather than offering a fixed story arc like many D&D modules, The Last Parsec provides locations, factions, and adventure seeds for players and GMs (or in my case, the AI) to shape collaboratively.
Solo RPG Character Creation in Savage Worlds
Creating a character in Savage Worlds starts with five core attributes—Agility, Smarts, Spirit, Strength, and Vigor—each beginning at a baseline and improved with a limited pool of points. These attributes shape the foundation of your abilities, while your skills—linked directly to those attributes—determine what you can actually do in play, from piloting a starship to bluffing your way past a suspicious customs officer.
One of the most distinctive elements is the use of Hindrances. These aren’t just disadvantages—they’re roleplaying hooks that define your character’s flaws, debts, or quirks. Taking Hindrances earns you extra points to invest back into attributes, skills, or Edges, which are special advantages that set your character apart. It’s a trade‑off system that rewards leaning into your character’s weaknesses as much as their strengths.
Unlike class‑based systems such as Dungeons & Dragons, Savage Worlds doesn’t box you into a predefined archetype. You can mix and match abilities freely to create a concept that fits the story you want to tell. For my solo RPG campaign in The Last Parsec, that meant building James Conway as a quick‑witted, overconfident pilot with both the skill to fly into trouble and the charm to talk his way back out. In Savage Worlds, the system flexes to match your vision instead of forcing your vision to fit the system.
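Part of what makes those trait dice fun in play is that they “ace”: roll the die’s maximum and you roll again and add, so even a lowly d4 can occasionally blow past a d12. A minimal shell sketch of that mechanic (my own illustration for the curious, not any official tooling):

```shell
# Roll an exploding ("acing") trait die, Savage Worlds style:
# if the die shows its maximum value, roll again and add the results.
roll_trait() {
  local sides=$1 total=0 r
  while :; do
    r=$(( RANDOM % sides + 1 ))
    total=$(( total + r ))
    if [ "$r" -lt "$sides" ]; then break; fi
  done
  echo "$total"
}

roll_trait 8   # e.g. one of Conway's Piloting d8 rolls; 4+ is a success
```

It’s a tiny thing mechanically, but it’s a big part of why Savage Worlds fights feel swingy and cinematic rather than procedural.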
Meet James Conway
James Conway
James Conway is the central character of my Savage Worlds solo RPG campaign set in The Last Parsec—a human starship captain who lives at the intersection of charm, risk‑taking, and sheer piloting talent. He’s the kind of man who can bluff his way into a high‑stakes card game, fly through an asteroid field without flinching, and then talk his way out of the docking fees when he lands.
Attributes: His Agility (d8) gives him the reflexes needed for tight maneuvers in the cockpit or a quick draw with a sidearm. Spirit (d8) fuels his confidence, his ability to read a room, and the unshakable calm he projects even when the odds are stacked. Smarts (d6) keep him sharp enough to navigate both hyperspace lanes and tricky social situations, while Vigor (d6) lets him shrug off the occasional hard landing or bar fight. Strength (d4) is his weakest point—Conway’s never been one for brute force when a smooth word or clever trick will do.
Skills: His bread and butter are Piloting (d8), Shooting (d8), and Persuasion (d8)—skills honed through years of scraping by on the fringe of The Last Parsec setting. Notice (d6) keeps him alert to danger, Stealth (d6) lets him slip away when needed, and Electronics (d6) makes him a passable hand with ship systems and security panels. Gambling (d4) speaks to his love of calculated risk, while Repair (d4), Athletics (d4), and Common Knowledge (d4) round him out as a jack‑of‑many‑trades, master of a few.
Edges:
Ace – Conway is a master pilot, gaining bonuses to Piloting rolls and able to push his ship beyond normal limits without breaking it… most of the time.
Charismatic – A natural people‑person (or at least, able to fake it), Conway gets a +2 to Persuasion, helping him cut deals, defuse tension, or talk his way past the occasional checkpoint.
Hindrances:
Overconfident (Major) – If it looks possible, Conway assumes he can pull it off—often biting off more than he can chew.
Obligation (Minor) – Someone out there is keeping tabs on him, and they expect their due.
Quirk – Flirt (Minor) – Conway has a tendency to turn on the charm at inopportune times, whether it’s with a bartender, a customs officer, or the person trying to shoot him.
In short, James Conway is a man who thrives on danger, dances around disaster, and somehow—by skill, luck, or pure stubbornness—usually comes out ahead. Picture the swagger of RDJ’s Tony Stark fused with Han Solo’s roguish instincts, and you’ve got the measure of him—a perfect fit for the high‑stakes, fast‑play universe of Savage Worlds in The Last Parsec.
Building the Thunderbird
In Savage Worlds, ships are built much like characters. You start with a chassis—the base framework of your vessel—and then customize it by spending a limited number of modification slots. Each mod changes how the ship performs, adding capabilities or improving existing ones, whether that’s better defenses, faster engines, or more advanced sensors.
For my Savage Worlds solo RPG campaign in The Last Parsec, the Thunderbird began as a Light Freighter—a balanced choice offering decent speed, cargo capacity, and room for a small crew. From there, I outfitted it with:
Cargo Hold – For legitimate freight… and less legitimate side jobs.
Living Quarters – Compact bunks, a galley, and just enough comfort for long hauls.
Reinforced Hull – Boosting Toughness so she can take a hit and keep flying.
Stealth System – A built‑in edge for avoiding unwanted attention, giving –2 to enemy sensor rolls.
Sensor Suite – +2 to Notice rolls when scanning or using ship systems.
Pulse Laser Turret – 3d6 damage, AP 2, perfect for a little persuasion in the void.
And then there’s Vera, the Thunderbird’s onboard AI. More than just a navigation system, Vera manages diagnostics, assists in gunnery, and delivers sharp‑tongued commentary whenever Conway’s decision‑making gets… creative. In a universe as unpredictable as The Last Parsec, a ship like the Thunderbird isn’t just transportation—it’s survival.
Behind the Scenes: Tools & Workflow
ChatGPT: My co-pilot for narrative prompts, NPC dialogue, and rules clarifications. Having a responsive AI means I can keep the game’s pace moving, improvise new scenes or encounters, and clarify rules on the fly without breaking immersion.
Savaged.us: This site handles all the heavy lifting for character creation and advancement. It keeps track of my attributes, skills, Edges, Hindrances, gear, and credits, while also making it easy to export polished character sheets for reference during play.
Foundry VTT: Hosted on my homelab, Foundry is my digital campaign hub. I use it for logging sessions, managing dice rolls, and, in the future, running maps and tactical encounters. Its modular nature means I can integrate Savage Worlds systems and customize it to my needs as the campaign evolves.
Session 0 Wrap-Up
With character and ship complete, Conway is primed for his first outing in The Last Parsec. This Savage Worlds solo RPG campaign kicks off with the Thunderbird armed, stealth‑capable, and home to Vera, an AI as likely to sass him as save him. The stage is set—the next ping could start something profitable… or catastrophic.
Coming in Session 1: Dockside at Karnak’s Refuge, Conway’s comms ping with a call from Creeg—a “friend” with a job that’s supposedly easy money. Easy, of course, being a relative term. Follow along for Session 1, share your plot ideas, and join me in charting a solo course through The Last Parsec.
If you enjoyed this and want to explore more campaign logs, check out the Tabletop RPG archives for other adventures I’ve run and played, including more Savage Worlds content and other solo RPG experiments.
Run AI Locally on Fedora with Ollama: Live Ops 005
I wanted to run AI locally on Fedora using an Nvidia GPU I had sitting in my Dell PowerEdge R730 — and an hour and a half later, I actually got it working. Here’s what the road looked like.
The hardware is a Tesla K80, which is basically two 12GB VRAM cards in one chassis and cost me next to nothing off Amazon. The plan was straightforward: get the Nvidia drivers working on Fedora Server 41, pull down Ollama in a Podman container, add Open WebUI on top for a ChatGPT-like interface, and wire it all up behind NGINX Proxy Manager. I had notes. I had a cheat sheet. I was ready.
What I wasn’t ready for was Fedora’s relationship with Nvidia drivers. The Tesla K80 needs the 470.xx legacy driver, not the current one, and figuring that out took most of the first hour. Between installing the wrong driver version, conflicts with packages left over from previous attempts, and the driver service not loading cleanly, it took a forced rebuild of akmods and dracut before nvidia-smi finally came back with output. AMD cards just work. Nvidia is a project.
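For anyone retracing this on their own Fedora box, the rough shape of the fix looked like this. Treat it as a sketch rather than a recipe — the package names are RPM Fusion’s legacy branch, and your repo setup and kernel version will change the details:

```shell
# Enable RPM Fusion (free + nonfree), which carries the legacy Nvidia branches
sudo dnf install \
  https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
  https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

# The Tesla K80 needs the 470xx legacy driver, not the current akmod-nvidia
sudo dnf install akmod-nvidia-470xx xorg-x11-drv-nvidia-470xx-cuda

# If the module won't load cleanly, force the akmod rebuild and
# regenerate the initramfs, then reboot
sudo akmods --force
sudo dracut --force

# After reboot, confirm the GPU is visible
nvidia-smi
```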
Once the GPU was cooperating, getting Ollama running locally turned out to be easier than the container approach I’d planned, since the GCR image path for the Podman version kept returning errors. Ollama installed directly on the host, Llama 3.2 pulled down without complaint, and I had a working local model to talk to inside of a few minutes. DeepSeek R1 8B came along for the ride as a second model to compare. Open WebUI came up in a Podman container, and after a solid round of “why won’t this connect,” the fix turned out to be a single :Z flag on the volume mount — SELinux doing what SELinux does.
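If you want to replicate the working end state, the host-install path plus the SELinux-friendly container run came down to something like this. The install script URL and image path are the projects’ published ones; the networking flags are one way to let the container reach Ollama on the host, and will vary with your Podman setup:

```shell
# Ollama directly on the host (the container route was fighting me)
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2
ollama pull deepseek-r1:8b

# Open WebUI in a Podman container; the :Z suffix tells Podman to relabel
# the volume for SELinux, which was the whole "why won't this connect" fix
podman run -d --name open-webui -p 3000:8080 \
  --add-host=host.containers.internal:host-gateway \
  -v open-webui:/app/backend/data:Z \
  -e OLLAMA_BASE_URL=http://host.containers.internal:11434 \
  ghcr.io/open-webui/open-webui:main
```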
By the end of the stream, Open WebUI was running, both models were loaded, and I had a local AI setup I could actually use. The NGINX reverse proxy was still showing a bad gateway when I had to call it for the day, but that’s a problem for next session. On the to-do list: get RAG working so I can feed it my own blog posts as a local knowledge base, and eventually stop paying OpenAI for what I can run at home.
Watch the full stream above, and if you want to follow along as the homelab build continues, subscribe to the channel or grab the audio version of the main show at podcast.itguyeric.com.
The goal was simple: build a reusable Fedora golden image homelab template I can clone whenever I need a new server. Nothing about it was simple.
The plan was to use Image Builder inside Cockpit to create a base Fedora server image, then store the resulting QCOW2 file on my unRAID box so I can spin up new VMs on demand. I had packages already lined up in the blueprint: chrony, Cockpit, NetworkManager, bash-completion, vim-enhanced, tuned, and a few others. The Ansible service account was configured, IPv6 disabled at the kernel level, and automatic partitioning set so I’m not hand-holding every deployment. It was supposed to be a quick stream before the Fedora podcast.
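For reference, an Image Builder blueprint is just TOML under the hood, whether you edit it in Cockpit or feed it to composer-cli. A trimmed sketch of the kind of blueprint I was working from (names and the SSH key are illustrative, not my exact file):

```toml
name = "fedora-golden"
description = "Base Fedora server template for homelab VMs"
version = "0.0.1"

[[packages]]
name = "chrony"
version = "*"

[[packages]]
name = "cockpit"
version = "*"

[[packages]]
name = "vim-enhanced"
version = "*"

[[packages]]
name = "tuned"
version = "*"

# Disable IPv6 at the kernel level for every clone of this image
[customizations.kernel]
append = "ipv6.disable=1"

# Service account for Ansible, with key-based SSH access
[[customizations.user]]
name = "ansible"
key = "ssh-ed25519 AAAA... ansible@homelab"
groups = ["wheel"]
```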
What actually happened was a solid hour of fighting permissions issues with the weldr group, a blueprint package that didn’t exist in Fedora the way it does in RHEL, and Image Builder refusing to build ISO images while happily producing QCOW2 files without complaint. Once I got the QCOW2 built and pulled down, I had to convert it to an IMG file using qemu-img convert just to get unRAID to accept it, then resize the image to 32 gigs because the VM came up with a 4.2 gig root partition and nowhere to grow. ext4 on an unRAID VM turned out to be less cooperative than I’d hoped.
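The convert-and-resize step itself is only a couple of commands. Assuming filenames like mine, it looked roughly like:

```shell
# unRAID wants a raw .img, not QCOW2
qemu-img convert -f qcow2 -O raw fedora-golden.qcow2 fedora-golden.img

# Grow the image so root has room to expand (the partition and ext4
# filesystem still need to be grown inside the guest after boot)
qemu-img resize -f raw fedora-golden.img 32G

# Sanity-check the result before handing it to unRAID
qemu-img info fedora-golden.img
```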
The good news: by the end of the stream I had a booting Fedora VM with bash completion, VIM enhanced, and Cockpit already running, SSH keys in place, and the performance profile defaulted to virtual-guest automatically. The less good news: I’m probably moving VM deployment over to my Fedora-based ITG02 hypervisor from Live Ops 003, where QCOW2 management is a lot less painful. The Ansible playbook to automate the whole build-convert-deploy pipeline is the next piece, and that’s where things should start getting cleaner.
Watch the full stream above, and if you want to follow along as the homelab build continues, subscribe to the channel or grab the audio version of the main show at podcast.itguyeric.com.
A Dell PowerEdge R730 that’s been sitting in the rack collecting dust finally has a job, and this stream is the whole messy process of giving it one.
The goal was straightforward: install RHEL 8 on bare metal and get the R730 running as a RHEL hypervisor homelab host. But rather than just grabbing a stock ISO, I went straight to Red Hat Image Builder on console.redhat.com, because if you spent years doing TPMM work for RHEL, you use Image Builder. The hosted build queue had other ideas though, and what should have taken a few minutes turned into a 40-minute wait that only resolved after I kicked off a second build to shame the first one into finishing.
Once the ISO was finally in hand, getting the Dell to actually boot from it proved to be its own adventure. Wrong boot order, a RAM upgrade the server needed a moment to accept, and the classic “it wasn’t broken, just needed more time to transfer over the network” diagnosis courtesy of guest heckler Nate, the Iron Sysadmin. The install itself went smoothly once RHEL 8.10 got its turn: custom partitioning, a dedicated Ansible service account, kdump disabled, and TuneD set to virtual-host profile so the system optimizes for running VMs rather than general workloads.
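The TuneD piece is a one-liner if you’re following along at home. virtual-host is the profile for machines that run VMs; virtual-guest is its counterpart inside them:

```shell
# Optimize the host for running VMs rather than general workloads
sudo tuned-adm profile virtual-host

# Confirm which profile is active
tuned-adm active
```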
After the install, I wired the box into Red Hat Insights, which is included with any RHEL subscription and immediately flagged a handful of advisories, two of which were auto-remediable via an Ansible playbook without ever leaving the Insights UI. Cockpit got enabled, the virtualization host package group went in, and I spun up a test VM to confirm everything was working. I wrapped the stream with a local Image Builder instance installed on the hypervisor itself, laying the groundwork for generating golden VM templates for Fedora, CentOS Stream, and RHEL without depending on the hosted service.
Watch the full stream above, and if you want to follow along as the homelab build continues, subscribe to the channel or grab the audio version of the main show at podcast.itguyeric.com.
The WordPress migration homelab project that started as a quick detour turned into a two-hour deep dive — because nothing in the homelab ever goes exactly to plan.
If you caught Live Ops 001, you know we got the Matrix server almost running — right up until an SSL certificate issue stopped us cold. The problem? itguyeric.com was hosted on Namecheap’s EasyWP platform, which doesn’t let you export your own certificates. The fix was obvious: move the site home. And with a $94 renewal coming up in about a week, there was really no reason not to.
The plan was to spin up a fresh Fedora server VM, deploy WordPress in a Podman container, migrate the site using the All-in-One WP Migration plugin, and wire it all up behind NGINX Proxy Manager with a Let’s Encrypt cert that covers both itguyeric.com and matrix.itguyeric.com. Clean, tidy, two birds one stone. In practice: wrong ISO architecture, a MariaDB port conflict, a file permissions hang, and a Matrix federation API that kept failing despite the federation tester saying everything was fine. Classic.
By the end of the stream, WordPress was running locally and the Matrix server was showing green check marks on the self-check — but federation between accounts still wasn’t cooperating. Eric called it to go take care of family, with a promise to sort out the remaining certificate and domain config issues in a follow-up.
The WordPress migration homelab journey continues — subscribe so you don’t miss Live Ops 003 when it drops. Audio version of the main show is always at podcast.itguyeric.com.
I fired up a livestream to do something I’ve been putting off way too long: bringing my self-hosted Matrix server back from the dead, live, in real time, with no safety net.
The stream started simple enough — spin up a fresh VM, throw CentOS 7 on it to match the original environment (don’t @ me), and use the Matrix Docker Ansible Deploy project to get everything running again. The goal was straightforward: restore the old setup, get the Matrix bridges reconnected to Discord and Slack, and then figure out how to modernize the whole thing on something like Fedora or even OpenShift down the road.
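For context, the matrix-docker-ansible-deploy project drives the whole stack from a single playbook once your inventory and vars files describe your domain. The canonical invocation looks roughly like this — check the project’s docs for the current tag names, since they’ve shifted between releases:

```shell
# From a checkout of spantaleev/matrix-docker-ansible-deploy,
# with inventory/hosts and host_vars configured for your domain:
ansible-playbook -i inventory/hosts setup.yml \
  --tags=setup-all,ensure-matrix-users-created,start
```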
What actually happened was a four-plus-hour deep dive into DNS records, SSL certificate headaches, and Namecheap doing Namecheap things. By the end, federation checks were almost passing — but a CNAME where an A record needed to be, and a certificate tied to a shared hosting account I couldn’t extract cleanly, kept the finish line just out of reach. Classic homelab energy.
The good news: the Matrix server itself was provisioned, configured, and responding. The bad news: itguyeric.com’s WordPress site is hosted via Namecheap EasyWP, which doesn’t let you pull your own certs — so the next logical step is migrating the WordPress site home and tackling the SSL setup properly.
If you’re running your own Matrix homeserver or thinking about it, this stream is a pretty honest look at what that process actually involves — including the parts that go sideways. No polished edits, no second takes.
Watch the full stream above, and if you want to follow along in real time next time, join the IT Guy Show room on Matrix or hop into the Discord. You can also subscribe to the audio version of the main show at podcast.itguyeric.com.