This essay is full of questions about the philosophy of science vs engineering vs design. It tries to explain where AI fits in, and discusses the false certainty that powerful tools like these give us. Machine learning/artificial intelligence/deep learning/data science/call it what you want is full of pitfalls precisely because it is so powerful. There’s not much actionable in it, but as someone who’s been pushing the story of automated retraining and continuous deployment to squeeze performance out of ML models, it certainly gave me a lot to think about.
A great framing of the set of technologies that everyone is so excited about but which in many ways doesn’t yet live up to expectations. Sometimes, even people deeply embedded in the Silicon Valley/San Francisco tech world get stuck in the “We’re not facegoopplemazforce, so we don’t have the data, and can’t gain from ML” mindset. This is a mistake. The value is in domain-specific, low-cost solutions. Echoes a lot of ideas in Tim O’Reilly’s What’s the Future, which I’m reading right now.
Hanson’s views about emulating brains and how this kind of technology would affect labor markets are fascinating. The moral side of it is not too different from one of the questions raised by Weyl’s book mentioned above - assuming these ems are indistinguishable from people, is it fair/moral to bring them into the world and take a cut of their earnings? How do their needs and desires balance out? We’ve seen some of this tech roll out between this being recorded 7 years ago and now, but I’m sure that within our lifetimes we’ll get a lot closer to that world than Hanson suggests.
By now most people who deal with machine learning or natural language processing in some way are familiar with the King - Man + Woman = Queen example from word2vec. Here, Schmidt brings up a similarly relatable example: what word sits halfway between duck and soup? What words sit between that midpoint and the two extremes? Iterating these chains surfaces really cool patterns.
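A minimal sketch of the arithmetic behind both examples, using gensim (my choice of library, not necessarily Schmidt’s; “glove-wiki-gigaword-100” is one of the pretrained models gensim’s downloader actually provides):

```python
import gensim.downloader as api

# Load a small pretrained embedding model via gensim's downloader.
model = api.load("glove-wiki-gigaword-100")

# The classic analogy: king - man + woman ~ queen
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

# "What word is between duck and soup?": average the two vectors
# and look up the nearest neighbors of that midpoint.
midpoint = (model["duck"] + model["soup"]) / 2
print(model.similar_by_vector(midpoint, topn=5))
```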
I work in a field tangential to NLP, and constantly deal with experts whose explanations can go over my head. Any time I’m pointed to good resources on the topic, I’m happy. This is one of them. Additionally, for a good intro to word vectors in general, and how to compute them, you can check out Understanding word vectors, a Jupyter notebook by Allison Parrish.
This is a neat article on humans merging with computers. Altman’s take is that this has already started happening (our phones are an extension of ourselves), and that the trend is accelerating (a double exponential of improving hardware and more people doing AI research). It reminded me of Minsky: “The serious problems come from having little experience with machines of such complexity that we are not yet prepared to think effectively about them.” But we’ve been augmenting our bodies with technology for centuries. There’s a striking scene in The Name of the Rose where William explains the use of eyeglasses to incredulous fellow monks; the glasses quickly become a central object in the story.
Sharing this one for the novelty. It is nearly impossible to tell which one is the human and which one is the machine, which is mind-boggling.
To close on a good note, here is a cool project - training crows to collect cigarette butts, with computer vision and creativity!
Using short code snippets and a bag of mixed nuts as the motivating example, Bozonier explains a complex probability concept.
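The post itself has the details; as a flavor of the approach (my own sketch, not Bozonier’s code, with made-up numbers), here is how you might answer a mixed-nuts probability question by simulation:

```python
import random

# A hypothetical 100-nut bag that is 30% cashews.
bag = ["cashew"] * 30 + ["peanut"] * 70

def handful(bag, size=10):
    """Grab a handful of nuts, without replacement."""
    return random.sample(bag, size)

# Estimate P(at least 4 cashews in a handful of 10) by simulation.
trials = 100_000
hits = sum(handful(bag).count("cashew") >= 4 for _ in range(trials))
print(f"P(>= 4 cashews in a handful of 10) ~ {hits / trials:.3f}")
```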
Probably one of the best features in Spotify. Pretty cool story of how it came about.
Another one that I can’t comment much on, but want to share.
Ross Goodwin keeps pumping them out. Nonsensical, bizarre, and a bit over-dramatized, but interesting.
Another in-depth look at modern approaches to artificial intelligence and problem solving. As usual, Karpathy makes complex ideas understandable, this time using OpenAI’s Gym to play Pong.
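His post builds up a full learning agent; the skeleton underneath is just the Gym interaction loop. A minimal sketch (assuming the classic Gym API, with random actions standing in for the learned policy):

```python
import gym

env = gym.make("Pong-v0")
observation = env.reset()

done = False
total_reward = 0.0
while not done:
    # A trained policy would pick the action from the observation;
    # here we just sample a random one.
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    total_reward += reward

print("episode reward:", total_reward)
env.close()
```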
The application of these simple machine learning concepts keeps impressing me more and more. Autoencoders are a very simple idea. If anything, click through to see the side-by-side video.
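How simple? A minimal sketch in Keras (my choice of framework, not necessarily the article’s): squeeze the input through a narrow bottleneck, then train the network to reconstruct its own input.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Encoder: 784 pixels down to a 32-dimensional code.
# Decoder: 32 dimensions back up to 784 pixels.
autoencoder = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(784,)),
    layers.Dense(784, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# The target is the input itself: learn to reproduce flattened MNIST digits.
(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256)
```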
TL;DR: no one noticed until the guy revealed it.
The machine-learning-inspired cousin to Fizz Buzz in Too Much Detail. You shouldn’t miss this one, even if your knowledge of ML is minimal.
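The gag, roughly, in a sketch of my own (scikit-learn here; the post’s own stack may differ): treat FizzBuzz as a four-class classification problem over binary-encoded integers, train on the numbers above 100, and “predict” the answers for 1 through 100.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def encode(n, digits=10):
    """Binary-encode an integer as a list of bits."""
    return [(n >> i) & 1 for i in range(digits)]

def label(n):
    """0: print the number, 1: fizz, 2: buzz, 3: fizzbuzz."""
    return (n % 3 == 0) + 2 * (n % 5 == 0)

# Train on 101..1023 so the model never sees the numbers it must answer for.
X = np.array([encode(n) for n in range(101, 1024)])
y = np.array([label(n) for n in range(101, 1024)])
clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=1000).fit(X, y)

words = {0: "{}", 1: "fizz", 2: "buzz", 3: "fizzbuzz"}
for n in range(1, 101):
    print(words[clf.predict([encode(n)])[0]].format(n))
```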
While I enjoy reading about breakthrough techniques in deep learning, applied machine learning with weird and fun objectives and non-standard datasets is much more exciting.
Somehow, the dots connect in the future.
A good explanation of neural networks by example. It is amazing how quickly the toy problem of learning a couple of weights, using nothing beyond basic high school math, becomes intractable.
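The hand-worked version of that toy problem fits in a few lines of numpy (my own sketch, not the article’s code): learn two weights by plain gradient descent on a mean squared error.

```python
import numpy as np

# Data generated from the function we want to recover: y = 2*x1 + 3*x2
X = np.random.rand(100, 2)
y = X @ np.array([2.0, 3.0])

w = np.zeros(2)  # the two weights being learned
lr = 0.1         # learning rate
for _ in range(1000):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(y)  # gradient of the mean squared error
    w -= lr * grad

print(w)  # converges close to [2. 3.]
```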
If you haven’t yet, go read Part 1.