Similar Articles

Articles similar to the selected content.

Domain: thinkingmachines.ai · Added: 2025-09-10 · Status: βœ“ Success
Defeating Nondeterminism in LLM Inference
Reproducibility is a bedrock of scientific progress. However, it's remarkably difficult to get reproducible results out of large language models. For example,...
Similar Articles (10 found)
πŸ” 63.8% similar
Why DeepSeek is cheap at scale but expensive to run locally
https://www.seangoedecke.com/inference-batching-and-deepseek/
Why is DeepSeek-V3 supposedly fast and cheap to serve at scale, but too slow and expensive...
πŸ” View Similar Articles 🟠 HN
πŸ” 62.5% similar
LLM Engineer's Almanac - Workloads
https://modal.com/llm-almanac/workloads
The three types of LLM workloads and how to serve them We hold this truth to be self-evident: not all workloads are created equal. But for large langu...
πŸ” View Similar Articles 🟠 HN
πŸ” 60.9% similar
Techniques for training large neural networks
https://openai.com/index/techniques-for-training-large-neural-networks/
Large neural networks are at the core of many recent advances in AI, but training them is a difficult en...
πŸ” View Similar Articles
πŸ” 60.8% similar
The Principles of Deep Learning Theory (arxiv.org)
https://news.ycombinator.com/item?id=31051540
First, thanks to the publisher and authors for making this freely available! I retired recently after using neural networks since the 1980s. I still s...
πŸ” View Similar Articles
πŸ” 59.3% similar
Understanding LLM Inference Engines: Inside Nano-vLLM (Part 1) - Neutree Blog
https://neutree.ai/blog/nano-vllm-part-1
Architecture, Scheduling, and the Path from Prompt to Token. When deploying large langua...
πŸ” View Similar Articles 🟠 HN
πŸ” 58.8% similar
TimesFM: Time Series Foundation Model for time-series forecasting (github.com/google-research)
https://news.ycombinator.com/item?id=40297946
I'm curious why we seem convinced that this is a task that is possible or something worthy of investigation. I've worked on language models since 2018...
πŸ” View Similar Articles
πŸ” 58.7% similar
Writing an LLM from scratch, part 22 -- finally training our LLM!
https://www.gilesthomas.com/2025/10/llm-from-scratch-22-finally-training-our-llm
This post wraps up my notes on chapter 5 of Sebastian Raschka's book "Build a Large ...
πŸ” View Similar Articles 🟠 HN
πŸ” 58.6% similar
Extract-0: A specialized language model for document information extraction
https://news.ycombinator.com/item?id=45427634
> the generation of 281,128 augmented examples, from which 1,000 were held out as a benchmark test set. This model is trained on a custom dataset of 2...
πŸ” View Similar Articles
πŸ” 57.8% similar
Why AGI Will Not Happen – Tim Dettmers
https://timdettmers.com/2025/12/10/why-agi-will-not-happen/
If you are reading this, you probably have strong opinions about AGI, superintelligence, and the future of AI. Maybe you believe we are on the cusp of...
πŸ” View Similar Articles 🟠 HN
πŸ” 57.2% similar
My Python code is a neural network (gabornyeki.com)
https://news.ycombinator.com/item?id=40845304
This article doesn't talk much about testing or getting training data. It seems like that part is key. For code that you think you understand, it's be...
πŸ” View Similar Articles