Similar Articles

Articles similar to the selected content.

Domain: thume.ca Added: 2025-07-13 Status: βœ“ Success
Production Twitter on One Machine? 100Gbps NICs and NVMe are fast In this post I’ll attempt the fun stunt of designing a system that could serve the full production load of Twitter with most of the fe...
Similar Articles (10 found)
πŸ” 72.2% similar
Production Twitter on one machine? 100Gbps NICs and NVMe are fast (thume.ca)
https://news.ycombinator.com/item?id=34291191
I'm going to preface this criticism by saying that I think exercises like this are fun in an architectural/prototyping code-golf kinda way. However, I...
πŸ” View Similar Articles
πŸ” 62.1% similar
Scaling PostgresML to 1M Requests per Second (postgresml.org)
https://news.ycombinator.com/item?id=33518443
What is a good algorithm-to-purpose map for ML beginners? Looking for something like "Algo X is good for making predictions when your data looks like ...
πŸ” View Similar Articles
πŸ” 60.3% similar
Stretch iPhone to its Limit, a 2GiB Model that can Draw Everything in Your Pocket
https://liuliu.me/eyes/stretch-iphone-to-its-limit-a-2gib-model-that-can-draw-everything-in-your-pocket/
Every year, we have a new iPhone that claims to be faster and better in every way. And yes, these new computer vision models and new image sensors can...
πŸ” View Similar Articles 🟠 HN
πŸ” 57.8% similar
Why DeepSeek is cheap at scale but expensive to run locally
https://www.seangoedecke.com/inference-batching-and-deepseek/
Why DeepSeek is cheap at scale but expensive to run locally Why is DeepSeek-V3 supposedly fast and cheap to serve at scale, but too slow and expensive...
πŸ” View Similar Articles 🟠 HN
πŸ” 56.9% similar
Writing an LLM from scratch, part 22 -- finally training our LLM!
https://www.gilesthomas.com/2025/10/llm-from-scratch-22-finally-training-our-llm
Writing an LLM from scratch, part 22 -- finally training our LLM! This post wraps up my notes on chapter 5 of Sebastian Raschka's book "Build a Large ...
πŸ” View Similar Articles 🟠 HN
πŸ” 56.5% similar
Are GPUs Worth It for ML? (exafunction.com)
https://news.ycombinator.com/item?id=32641769
For some reason they focus on the inference, which is the computationally cheap part. If you're working on ML (as opposed to deploying someone else's ...
πŸ” View Similar Articles
πŸ” 56.3% similar
Making Python use all those Cores and RAM
http://mcottondesign3.appspot.com/post/ahRzfm1jb3R0b25kZXNpZ24zLWhyZHIRCxIEQmxvZxiAgID0y8COCQw
Making Python use all those Cores and RAM It is cheap and easy to build a machine with 8/16 cores and 32GB of RAM. It is more complicated to make Pyth...
πŸ” View Similar Articles
πŸ” 55.9% similar
[Revised] You Don’t Need to Spend $100/mo on Claude Code: Your Guide to Local Coding Models
https://www.aiforswes.com/p/you-dont-need-to-spend-100mo-on-claude
[Revised] You Don’t Need to Spend $100/mo on Claude Code: Your Guide to Local Coding Models What you need to know about local model tooling and the st...
πŸ” View Similar Articles 🟠 HN
πŸ” 55.9% similar
Owning a $5M data center
https://blog.comma.ai/datacenter/
Owning a $5M data center These days it seems you need a trillion fake dollars, or lunch with politicians to get your own data center. They may help, b...
πŸ” View Similar Articles 🟠 HN
πŸ” 54.9% similar
Redis is fast - I'll cache in Postgres
https://dizzy.zone/2025/09/24/Redis-is-fast-Ill-cache-in-Postgres/
There are books & many articles online, like this one arguing for using Postgres for everything. I thought I’d take a look at one use case - using Pos...
πŸ” View Similar Articles 🟠 HN