Production Twitter on One Machine? 100Gbps NICs and NVMe are fast
In this post I’ll attempt the fun stunt of designing a system that could serve the full production load of Twitter with most of the features...
Similar Articles (10 found)

- 72.2% similar: I'm going to preface this criticism by saying that I think exercises like this are fun in an architectural/prototyping code-golf kinda way. However, I...
- 62.1% similar: What is a good algorithm-to-purpose map for ML beginners? Looking for something like "Algo X is good for making predictions when your data looks like ...
- 60.3% similar: Every year, we have a new iPhone that claims to be faster and better in every way. And yes, these new computer vision models and new image sensors can...
- 57.8% similar: "Why DeepSeek is cheap at scale but expensive to run locally". Why is DeepSeek-V3 supposedly fast and cheap to serve at scale, but too slow and expensive...
- 56.9% similar: "Writing an LLM from scratch, part 22 -- finally training our LLM!" This post wraps up my notes on chapter 5 of Sebastian Raschka's book "Build a Large ...
- 56.5% similar: For some reason they focus on the inference, which is the computationally cheap part. If you're working on ML (as opposed to deploying someone else's ...
- 56.3% similar: "Making Python use all those Cores and RAM". It is cheap and easy to build a machine with 8/16 cores and 32GB of RAM. It is more complicated to make Pyth...
- 54.9% similar: There are books & many articles online, like this one arguing for using Postgres for everything. I thought I'd take a look at one use case - using Pos...
- 54.6% similar: Sam said yesterday that chatgpt handles ~700M weekly users. Meanwhile, I can't even run a single GPT-4-class model locally without insane VRAM or pain...
- 54.6% similar: "How AWS S3 serves 1 petabyte per second on top of slow HDDs". Learn how Amazon built the backbone of the modern web that scales to 1 PB/s and 150M QPS o...