Similar Articles

Articles similar to the selected content.

Domain: news.ycombinator.com · Added: 2025-08-13 · Status: ✓ Success
Sam said yesterday that ChatGPT handles ~700M weekly users. Meanwhile, I can't even run a single GPT-4-class model locally without insane VRAM or painfully slow speeds. Sure, they have huge GPU cluste...
Similar Articles (10 found)
🔍 66.5% similar
How we run GPT OSS 120B at 500+ tokens per second on NVIDIA GPUs | Baseten Blog
https://www.baseten.co/blog/sota-performance-for-gpt-oss-120b-on-nvidia-gpus/
Day zero model performance optimization work is a mix of experimentation, bug fixing, and benchmarking guided by intuition and experience. This writeu...
🟠 HN
🔍 66.3% similar
Are GPUs Worth It for ML? (exafunction.com)
https://news.ycombinator.com/item?id=32641769
For some reason they focus on the inference, which is the computationally cheap part. If you're working on ML (as opposed to deploying someone else's ...
🔍 64.8% similar
Why DeepSeek is cheap at scale but expensive to run locally
https://www.seangoedecke.com/inference-batching-and-deepseek/
Why DeepSeek is cheap at scale but expensive to run locally Why is DeepSeek-V3 supposedly fast and cheap to serve at scale, but too slow and expensive...
🟠 HN
🔍 63.9% similar
LLM Engineer's Almanac - Workloads
https://modal.com/llm-almanac/workloads
The three types of LLM workloads and how to serve them We hold this truth to be self-evident: not all workloads are created equal. But for large langu...
🟠 HN
🔍 63.2% similar
Scaling PostgresML to 1M Requests per Second (postgresml.org)
https://news.ycombinator.com/item?id=33518443
What is a good algorithm-to-purpose map for ML beginners? Looking for something like "Algo X is good for making predictions when your data looks like ...
🔍 63.1% similar
Techniques for training large neural networks
https://openai.com/index/techniques-for-training-large-neural-networks/
Techniques for training large neural networks Large neural networks are at the core of many recent advances in AI, but training them is a difficult en...
🔍 61.5% similar
Laptop-Only LLM: Tune Google Gemma 3 in Minutes (Code Inside)
https://pub.towardsai.net/laptop-only-llm-tune-google-gemma-3-in-minutes-code-inside-d86fa83e0d8f?source=rss----98111c9905da---4
Laptop-Only LLM: Tune Google Gemma 3 in Minutes (Code Inside) A clean, from-scratch walkthrough (with code) to tune a 270M-para...
🔍 60.9% similar
Extract-0: A specialized language model for document information extraction
https://news.ycombinator.com/item?id=45427634
> the generation of 281,128 augmented examples, from which 1,000 were held out as a benchmark test set. This model is trained on a custom dataset of 2...
🔍 60.9% similar
Owning a $5M data center
https://blog.comma.ai/datacenter/
Owning a $5M data center These days it seems you need a trillion fake dollars, or lunch with politicians to get your own data center. They may help, b...
🟠 HN
🔍 60.3% similar
Understanding LLM Inference Engines: Inside Nano-vLLM (Part 1) - Neutree Blog
https://neutree.ai/blog/nano-vllm-part-1
Understanding LLM Inference Engines: Inside Nano-vLLM (Part 1) Architecture, Scheduling, and the Path from Prompt to Token When deploying large langua...
🟠 HN