Overfitting is commonplace in machine learning. But by confining ourselves to graphical data exploration, we risk another form of overfitting: archetypal overfitting. In this post, I outline why this is a problem and how to reduce it through better data analysis.
Using Rust for machine learning still has a ways to go. It's possible to overcome some of the limitations, however, by getting familiar with the low-level tools that drive high-performance linear algebra: SIMD intrinsics, BLAS, and LAPACK.
Generative adversarial networks (GANs) are all the rage. I explore whether we can generate cookie recipes with GAN architectures, in a way that is neither NLP nor computer vision.
Machine learning, the field on which the vast majority of artificial intelligence systems depend, has tremendous potential to do good if harnessed correctly. When used properly, algorithms can allow for better-timed phone calls, conversations directly related to a voter's interests, and, hopefully, fewer robocalls in the middle of dinner.