<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Llm on Zenithia</title><link>/tags/llm/</link><description>Recent content in Llm on Zenithia</description><generator>Hugo</generator><language>en-nz</language><lastBuildDate>Fri, 08 May 2026 14:41:45 +0300</lastBuildDate><atom:link href="/tags/llm/index.xml" rel="self" type="application/rss+xml"/><item><title>Running Qwen3.6-35B-A3B on a gaming PC (Written by human)</title><link>/posts/local-qwen/</link><pubDate>Fri, 08 May 2026 14:41:45 +0300</pubDate><guid>/posts/local-qwen/</guid><description>&lt;p>My interest in LLMs for code-assist tooling started before GPT went viral. Back then I tried a vscode extension called Tabnine, which provided &amp;ldquo;smart&amp;rdquo; auto-completion while writing code. It was heavy: vscode felt sluggish and the fans whirred louder, but the thing felt like magic sometimes. Half the time the suggested completions were completely off, but the other half it felt like it was reading my mind. I enjoyed this experience because there was no friction between accepting and ignoring these completions, and it sped up going from idea to implementation.&lt;/p></description></item></channel></rss>