Similar Articles

Articles similar to the selected content.

Domain: openai.com Β· Added: 2025-07-13 Β· Status: βœ“ Success
Techniques for training large neural networks Large neural networks are at the core of many recent advances in AI, but training them is a difficult engineering and research challenge which requires or...
Similar Articles (10 found)
πŸ” 71.4% similar
Why DeepSeek is cheap at scale but expensive to run locally
https://www.seangoedecke.com/inference-batching-and-deepseek/
Why DeepSeek is cheap at scale but expensive to run locally Why is DeepSeek-V3 supposedly fast and cheap to serve at scale, but too slow and expensive...
πŸ” View Similar Articles 🟠 HN
πŸ” 68.8% similar
Are GPUs Worth It for ML? (exafunction.com)
https://news.ycombinator.com/item?id=32641769
For some reason they focus on the inference, which is the computationally cheap part. If you're working on ML (as opposed to deploying someone else's ...
πŸ” View Similar Articles
πŸ” 67.3% similar
The Principles of Deep Learning Theory (arxiv.org)
https://news.ycombinator.com/item?id=31051540
First, thanks to the publisher and authors for making this freely available! I retired recently after using neural networks since the 1980s. I still s...
πŸ” View Similar Articles
πŸ” 64.5% similar
A Recipe for Training Neural Networks
http://karpathy.github.io/2019/04/25/recipe/
A Recipe for Training Neural Networks Some few weeks ago I posted a tweet on β€œthe most common neural net mistakes”, listing a few common gotchas relat...
πŸ” View Similar Articles 🟠 HN
πŸ” 64.2% similar
Extract-0: A specialized language model for document information extraction
https://news.ycombinator.com/item?id=45427634
> the generation of 281,128 augmented examples, from which 1,000 were held out as a benchmark test set. This model is trained on a custom dataset of 2...
πŸ” View Similar Articles
πŸ” 63.4% similar
Writing an LLM from scratch, part 22 -- finally training our LLM!
https://www.gilesthomas.com/2025/10/llm-from-scratch-22-finally-training-our-llm
Writing an LLM from scratch, part 22 -- finally training our LLM! This post wraps up my notes on chapter 5 of Sebastian Raschka's book "Build a Large ...
πŸ” View Similar Articles 🟠 HN
πŸ” 63.2% similar
My Python code is a neural network (gabornyeki.com)
https://news.ycombinator.com/item?id=40845304
This article doesn't talk much about testing or getting training data. It seems like that part is key. For code that you think you understand, it's be...
πŸ” View Similar Articles
πŸ” 63.1% similar
Ask HN: How can ChatGPT serve 700M users when I can't run one GPT-4 locally?
https://news.ycombinator.com/item?id=44840728
Sam said yesterday that chatgpt handles ~700M weekly users. Meanwhile, I can't even run a single GPT-4-class model locally without insane VRAM or pain...
πŸ” View Similar Articles
πŸ” 62.8% similar
Stretch iPhone to its Limit, a 2GiB Model that can Draw Everything in Your Pocket
https://liuliu.me/eyes/stretch-iphone-to-its-limit-a-2gib-model-that-can-draw-everything-in-your-pocket/
Every year, we have a new iPhone that claims to be faster and better in every way. And yes, these new computer vision models and new image sensors can...
πŸ” View Similar Articles 🟠 HN
πŸ” 62.3% similar
The Bitter Lesson is Misunderstood
https://obviouslywrong.substack.com/p/the-bitter-lesson-is-misunderstood
The Bitter Lesson is Misunderstood Together, the Bitter Lesson and Scaling Laws reveal that the god of Compute we worship is yoked to an even greater ...
πŸ” View Similar Articles 🟠 HN