First, thanks to the publisher and authors for making this freely available!
I retired recently after using neural networks since the 1980s. I still spend at least 10 hours a week keeping up with DL, ...
Similar Articles (9 found)
74.3% similar
This article doesn't talk much about testing or getting training data. It seems like that part is key.
For code that you think you understand, it's be...
73.9% similar
I'm curious why we seem convinced that this is a task that is possible or something worthy of investigation.
I've worked on language models since 2018...
72.5% similar
For some reason they focus on the inference, which is the computationally cheap part. If you're working on ML (as opposed to deploying someone else's ...
72.1% similar
> the generation of 281,128 augmented examples, from which 1,000 were held out as a benchmark test set.
This model is trained on a custom dataset of 2...
71.9% similar
A Peek at Trends in Machine Learning
Have you looked at Google Trends? It's pretty cool – you enter some keywords and see how Google Searches of that ...
71.3% similar
The Bitter Lesson is Misunderstood
Together, the Bitter Lesson and Scaling Laws reveal that the god of Compute we worship is yoked to an even greater ...
70.8% similar
Software 2.0
I sometimes see people refer to neural networks as just "another tool in your machine learning toolbox". They have some pros and cons, th...
70.7% similar
Yes you should understand backprop
When we offered CS231n (Deep Learning class) at Stanford, we intentionally designed the programming assignments to ...
70.1% similar
Does the Bitter Lesson Have Limits?
Recently, "the bitter lesson" is having a moment. Coined in an essay by Rich Sutton, the bitter lesson is that, "g...