<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Skills on Blog | gardlt.io</title><link>http://blog.gardlt.io/tags/skills/</link><description>Recent content in Skills on Blog | gardlt.io</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Fri, 20 Mar 2026 13:28:39 -0500</lastBuildDate><atom:link href="http://blog.gardlt.io/tags/skills/index.xml" rel="self" type="application/rss+xml"/><item><title>Token Consumption with GitHub Copilot Skills</title><link>http://blog.gardlt.io/posts/understanding-llm-token-consumption/</link><pubDate>Fri, 20 Mar 2026 13:28:39 -0500</pubDate><guid>http://blog.gardlt.io/posts/understanding-llm-token-consumption/</guid><description>&lt;h1 id="why-your-ai-skills-should-be-cli-first-a-token-cost-analysis"&gt;Why Your AI Skills Should Be CLI-First: A Token Cost Analysis&lt;/h1&gt;
&lt;p&gt;Many AI skills are written so that the model does all the heavy lifting: reading raw configuration files, running shell commands, and formatting output line by line. While this approach works, it comes at a cost: token consumption that scales poorly.&lt;/p&gt;
&lt;p&gt;This post walks through a concrete analysis of a workspace-setup skill that illustrates the problem and shows how shifting deterministic logic into a CLI binary can cut token consumption by over 95%.&lt;/p&gt;</description></item></channel></rss>