<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>LLM on Blowfish</title>
    <link>https://huggingaha.github.io/tags/llm/</link>
    <description>Recent content in LLM on Blowfish</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>zh-cn</language>
    <managingEditor>huggingaha@gmail.com (时影)</managingEditor>
    <webMaster>huggingaha@gmail.com (时影)</webMaster>
    <copyright>© 2026 时影</copyright>
    <lastBuildDate>Wed, 01 Oct 2025 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://huggingaha.github.io/tags/llm/index.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title>LoRA: Without Regret - Thinking Machines</title>
      <link>https://huggingaha.github.io/blogs/llm/lora-without-regret-thinkingmachines/</link>
      <pubDate>Wed, 01 Oct 2025 00:00:00 +0000</pubDate>
      <author>huggingaha@gmail.com (时影)</author>
      <guid>https://huggingaha.github.io/blogs/llm/lora-without-regret-thinkingmachines/</guid>
      <description/>
    </item>
    <item>
      <title>MBTI-in-Thoughts: An MBTI-Based Framework for LLM Behavior Control and Structured Reasoning</title>
      <link>https://huggingaha.github.io/blogs/llm/mbti-in-thoughts-psychologically-enhanced-ai-agents/</link>
      <pubDate>Tue, 30 Sep 2025 00:00:00 +0000</pubDate>
      <author>huggingaha@gmail.com (时影)</author>
      <guid>https://huggingaha.github.io/blogs/llm/mbti-in-thoughts-psychologically-enhanced-ai-agents/</guid>
      <description/>
    </item>
    <item>
      <title>Translation - Small Leak Can Sink a Great Ship: Boost RL Training on MoE with IcePop!</title>
      <link>https://huggingaha.github.io/blogs/llm/small-leak-can-sink-a-great-ship-boost-rl-training-on-moe-with-icepop/</link>
      <pubDate>Tue, 23 Sep 2025 00:00:00 +0000</pubDate>
      <author>huggingaha@gmail.com (时影)</author>
      <guid>https://huggingaha.github.io/blogs/llm/small-leak-can-sink-a-great-ship-boost-rl-training-on-moe-with-icepop/</guid>
      <description/>
    </item>
    <item>
      <title>Defeating Nondeterminism in LLM Inference - Thinking Machines</title>
      <link>https://huggingaha.github.io/blogs/llm/effective-context-engineering-for-ai-agents-claude/</link>
      <pubDate>Sun, 14 Sep 2025 00:00:00 +0000</pubDate>
      <author>huggingaha@gmail.com (时影)</author>
      <guid>https://huggingaha.github.io/blogs/llm/effective-context-engineering-for-ai-agents-claude/</guid>
      <description/>
    </item>
    <item>
      <title>Why Language Models Hallucinate: Deconstructing LLM Hallucinations</title>
      <link>https://huggingaha.github.io/blogs/llm/llm-hallucinate/</link>
      <pubDate>Sun, 07 Sep 2025 00:00:00 +0000</pubDate>
      <author>huggingaha@gmail.com (时影)</author>
      <guid>https://huggingaha.github.io/blogs/llm/llm-hallucinate/</guid>
      <description/>
    </item>
    <item>
      <title>Are LLM Hallucinations Inevitable?</title>
      <link>https://huggingaha.github.io/blogs/llm/llm-hallucinations-taxonomy/</link>
      <pubDate>Sat, 16 Aug 2025 00:00:00 +0000</pubDate>
      <author>huggingaha@gmail.com (时影)</author>
      <guid>https://huggingaha.github.io/blogs/llm/llm-hallucinations-taxonomy/</guid>
      <description/>
    </item>
    <item>
      <title>The "Position Effect" in In-Context Learning</title>
      <link>https://huggingaha.github.io/blogs/agent/bicl-prompt-positional-bias/</link>
      <pubDate>Fri, 01 Aug 2025 00:00:00 +0000</pubDate>
      <author>huggingaha@gmail.com (时影)</author>
      <guid>https://huggingaha.github.io/blogs/agent/bicl-prompt-positional-bias/</guid>
      <description/>
    </item>
    <item>
      <title>GSPO: Group Sequence Policy Optimization</title>
      <link>https://huggingaha.github.io/blogs/llm/gspo-rl-llm/</link>
      <pubDate>Mon, 28 Jul 2025 00:00:00 +0000</pubDate>
      <author>huggingaha@gmail.com (时影)</author>
      <guid>https://huggingaha.github.io/blogs/llm/gspo-rl-llm/</guid>
      <description/>
    </item>
    <item>
      <title>kimi-k2 Technical Report</title>
      <link>https://huggingaha.github.io/blogs/llm/kimi-k2-technical-report/</link>
      <pubDate>Tue, 22 Jul 2025 00:00:00 +0000</pubDate>
      <author>huggingaha@gmail.com (时影)</author>
      <guid>https://huggingaha.github.io/blogs/llm/kimi-k2-technical-report/</guid>
      <description/>
    </item>
    <item>
      <title>An Interpretation of the Context Engineering Survey</title>
      <link>https://huggingaha.github.io/blogs/agent/context-engineering-survey/</link>
      <pubDate>Sat, 19 Jul 2025 00:00:00 +0000</pubDate>
      <author>huggingaha@gmail.com (时影)</author>
      <guid>https://huggingaha.github.io/blogs/agent/context-engineering-survey/</guid>
      <description/>
    </item>
  </channel>
</rss>