The Week in Machine Learning – 11th April 2015

Jeff Heaton’s Kickstarter campaign to fund the third book in his Artificial Intelligence for Humans series came to the end of its funding period last week, having raised considerably more than the $2,500 target. Volume 3 covers deep learning and neural networks – follow Jeff on Twitter to keep up to date on his progress, and visit his homepage to support his efforts and see more of his work.

One of the more commercial stories of the week, reported by VentureBeat (albeit with limited information), is that photo messaging behemoth Snapchat has slowly been developing a research arm, most likely to build complex deep learning algorithms for image and video.

The story comes from exactly that – ‘a source’ – without any official word from Snapchat itself, but VentureBeat assembles the puzzle by noting that the company hired former Yahoo Senior Research Scientist Jia Li in February.

Slightly unrelated (but stick with it!), I was listening to Eben Upton of Raspberry Pi talk earlier this week about his early memories of programming computers. He described doing exactly what we used to do at school in the early days of microcomputers: walk into a computer store, type the short ’10 print “something rude”, 20 goto 10′ program into each machine, then hit enter on every one on the way out, leaving the staff running around in hysterics.
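For anyone too young to remember, that two-line BASIC prank is just an unbounded print loop. A Python analogue (my own sketch – bounded here so it actually terminates, unlike the original) looks like this:

```python
# Python analogue of the classic two-line BASIC prank:
#   10 PRINT "something rude"
#   20 GOTO 10
# The BASIC original loops forever; this version stops after a few
# iterations so it can run to completion.
def prank(message, iterations=5):
    """Return the repeated message instead of looping forever."""
    lines = []
    for _ in range(iterations):
        lines.append(message)
    return lines

print("\n".join(prank("something rude")))
```

The GOTO made the original run until a shop assistant pulled the plug, which was rather the point.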

Upton’s story is a way of explaining that computers no longer boot to the command prompt. Twenty to thirty years ago we were all programmers to a degree, because the first thing you had to do when turning on a home computer was tell it that you did not want to program. The Pi aims to bring computing to more children because of its low cost, and to create more pure hackers once again.

OK, the story is a bit of a tangent, but it leads on to David Walz’s blog post ‘Deep Learning on the Raspberry Pi’, which created a noteworthy buzz where the Raspberry Pi and machine learning circles overlap. Walz uses Jetpac’s DeepBeliefSDK to run a convolutional neural network on a Raspberry Pi for image recognition, further exploring what the Pi is capable of. With this in mind, the Pi will surely also make deep learning more accessible to children with limited funds and limited access to computers.

Everyone’s favourite data science source, KDnuggets, this week featured a piece by computer science student Nikhil Buduma, ‘Computer Vision with Convolutional Neural Networks’, which looks at why convolutional neural nets have become so successful in deep learning and sets out to explain how filters and feature maps combine to build these networks.
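As a rough illustration of the filters and feature maps Buduma describes (this NumPy sketch is my own, not taken from his post), a convolutional layer slides a small filter over an image and records its response at each position, producing a feature map:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D convolution: slide `kernel` over `image` and
    record the dot product at each position, giving a feature map."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# A tiny image whose left half is dark (0) and right half bright (1).
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A vertical-edge filter: responds where intensity increases
# from left to right.
edge_filter = np.array([
    [-1, 1],
    [-1, 1],
], dtype=float)

feature_map = convolve2d(image, edge_filter)
```

The resulting feature map is strongly positive exactly along the dark-to-bright boundary and zero elsewhere; a CNN learns many such filters, and each one yields its own feature map.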

Another story doing the rounds over the last few days comes from MIT Technology Review, about a machine learning algorithm that mines more than 16 billion emails. The report centres on researchers from Yahoo Labs in Barcelona and the University of Southern California, who studied patterns of behaviour in a database of 16 billion emails exchanged over two months – and it must be said that the data use was authorised for research purposes!

The emails did not include spam, and the elements that showed clear patterns were the age and gender of the senders and the timing of the emails. The research allowed the team to build a machine learning algorithm able to map out email conversations, including predicting when a conversation is likely to end – something which seems to have taken a while to explore, yet will likely play a critical role in the next generation of email systems.
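The report doesn’t say which model the team used, but the general shape of such a predictor – learn from features like timing and thread length whether a conversation is about to end – can be sketched as a toy logistic regression on synthetic data (everything here, features and labels alike, is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-message features: [hour of day / 24, replies so far / 10].
# Label: 1 if the conversation ended after this message. Entirely synthetic.
n = 200
X = rng.random((n, 2))
# Made-up generating rule: long threads late in the day tend to wind down.
y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.2, n) > 1.0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit logistic regression by plain gradient descent on the mean log-loss.
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y)) / n
    b -= lr * np.mean(p - y)

probs = sigmoid(X @ w + b)           # predicted P(conversation ends)
accuracy = np.mean((probs > 0.5) == y)
```

The real system presumably works on vastly richer features drawn from those 16 billion messages, but the principle – a learned probability that the thread is over – is the same.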

In the community, Silicon Valley’s enormous Machine Learning Meetup also took place at the Hacker Dojo, with Jorge Martinez giving an introduction to Distributed R, an extension to R for big data processing. The video is now up on YouTube; the high-performance platform is much faster than standard R and able to handle much larger workloads. The presentation also included an example of solving real-world machine learning problems using the Kaggle March Madness dataset.
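The core idea behind Distributed R – partition a large dataset across workers, compute on each partition in parallel, then combine the results – can be sketched in Python (my analogue of the pattern, not Distributed R’s own API, which is R-only):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    """Work done independently on one partition of the data."""
    return sum(x * x for x in chunk)

def split(data, n_parts):
    """Divide the data into roughly equal partitions."""
    size = (len(data) + n_parts - 1) // n_parts
    return [data[i:i + size] for i in range(0, len(data), size)]

data = list(range(1, 101))   # stand-in for a much larger dataset
chunks = split(data, 4)

# Map: each worker processes its own partition in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(partial_sum_of_squares, chunks))

# Reduce: combine the partial results into the final answer.
total = sum(partials)
```

In Distributed R the partitions live as distributed arrays across machines rather than threads in one process, which is what lets it scale past what a single R session can hold in memory.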

Hacker News picked up on a blog post by Tomasz Malisiewicz, ‘Deep Learning vs Probabilistic Graphical Models vs Logic’, which explores the three paradigms that have shaped AI over the last 50 years. Malisiewicz sweeps through our understanding of AI, asserting that logic-based systems are devoid of perception, a vital component of human-like understanding.

Whilst machine learning methods have evolved from graphical models to convolutional neural networks, Malisiewicz sees the future as including hybrid approaches that combine probabilistic graphical models with CNNs.

Lastly, tickets are still available for the Deep Learning Summit at the Hyatt Regency in Boston, 26–27 May, featuring talks from Andrew Ng of Baidu, Claudia Perlich of Dstillery, Hasan Sawaf of eBay and heaps more.



About Gary Donovan

Machine Learning and Data Science blogger, hacker, consultant living in Melbourne, Australia. Passionate about the people and communities that drive forward the evolution of technology.