Similar Articles

Articles similar to the selected content.

Domain: thume.ca · Added: 2025-07-13 · Status: ✓ Success
Production Twitter on One Machine? 100Gbps NICs and NVMe are fast
In this post I’ll attempt the fun stunt of designing a system that could serve the full production load of Twitter with most of the fe...
Similar Articles (10 found)
72.2% similar
Production Twitter on one machine? 100Gbps NICs and NVMe are fast (thume.ca)
https://news.ycombinator.com/item?id=34291191
I'm going to preface this criticism by saying that I think exercises like this are fun in an architectural/prototyping code-golf kinda way. However, I...
62.1% similar
Scaling PostgresML to 1M Requests per Second (postgresml.org)
https://news.ycombinator.com/item?id=33518443
What is a good algorithm-to-purpose map for ML beginners? Looking for something like "Algo X is good for making predictions when your data looks like ...
60.3% similar
Stretch iPhone to its Limit, a 2GiB Model that can Draw Everything in Your Pocket
https://liuliu.me/eyes/stretch-iphone-to-its-limit-a-2gib-model-that-can-draw-everything-in-your-pocket/
Every year, we have a new iPhone that claims to be faster and better in every way. And yes, these new computer vision models and new image sensors can...
57.8% similar
Why DeepSeek is cheap at scale but expensive to run locally
https://www.seangoedecke.com/inference-batching-and-deepseek/
Why is DeepSeek-V3 supposedly fast and cheap to serve at scale, but too slow and expensive...
56.9% similar
Writing an LLM from scratch, part 22 -- finally training our LLM!
https://www.gilesthomas.com/2025/10/llm-from-scratch-22-finally-training-our-llm
This post wraps up my notes on chapter 5 of Sebastian Raschka's book "Build a Large ...
56.5% similar
Are GPUs Worth It for ML? (exafunction.com)
https://news.ycombinator.com/item?id=32641769
For some reason they focus on the inference, which is the computationally cheap part. If you're working on ML (as opposed to deploying someone else's ...
56.3% similar
Making Python use all those Cores and RAM
http://mcottondesign3.appspot.com/post/ahRzfm1jb3R0b25kZXNpZ24zLWhyZHIRCxIEQmxvZxiAgID0y8COCQw
It is cheap and easy to build a machine with 8/16 cores and 32GB of RAM. It is more complicated to make Pyth...
54.9% similar
Redis is fast - I'll cache in Postgres
https://dizzy.zone/2025/09/24/Redis-is-fast-Ill-cache-in-Postgres/
There are books & many articles online, like this one arguing for using Postgres for everything. I thought I’d take a look at one use case - using Pos...
54.6% similar
Ask HN: How can ChatGPT serve 700M users when I can't run one GPT-4 locally?
https://news.ycombinator.com/item?id=44840728
Sam said yesterday that chatgpt handles ~700M weekly users. Meanwhile, I can't even run a single GPT-4-class model locally without insane VRAM or pain...
54.6% similar
how AWS S3 serves 1 petabyte per second on top of slow HDDs
https://bigdata.2minutestreaming.com/p/how-aws-s3-scales-with-tens-of-millions-of-hard-drives
Learn how Amazon built the backbone of the modern web that scales to 1 PB/s and 150M QPS o...