Showing posts with label NEURAL NETWORK. Show all posts

Monday, 25 April 2022

Introduction to GraphSAGE in Python by @maximelabonne via @TDataScience

Scaling Graph Neural Networks to billions of connections.

I like this one: it is very clear and easy to understand, with great code examples.
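The article itself works through GraphSAGE with PyTorch Geometric; as a taste of what the layer does, here is a dependency-free sketch of the mean-aggregation step. The scalar weights and the tiny graph are made up for illustration (the real layer uses learned weight matrices):

```python
# Toy sketch of GraphSAGE's mean-aggregation step (pure Python,
# not the PyTorch Geometric code from the article).
def sage_layer(features, neighbors, w_self, w_neigh):
    """One layer: mix a node's own features with the mean of its
    neighbours' features, then apply a ReLU."""
    new_features = {}
    for node, feat in features.items():
        neigh = neighbors.get(node, [])
        if neigh:
            mean = [sum(features[n][i] for n in neigh) / len(neigh)
                    for i in range(len(feat))]
        else:
            mean = [0.0] * len(feat)
        # h' = relu(w_self * h + w_neigh * mean(h_neighbours))
        new_features[node] = [max(0.0, w_self * f + w_neigh * m)
                              for f, m in zip(feat, mean)]
    return new_features

features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
neighbors = {0: [1, 2], 1: [0], 2: [0]}
print(sage_layer(features, neighbors, w_self=1.0, w_neigh=1.0))
```

Stacking two or three such layers lets information flow several hops across the graph, which is the whole trick behind scaling to very large networks via neighbour sampling.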

Wednesday, 18 August 2021

3 Reasons Why You Should Use Linear Regression Models Instead of Neural Networks by Terence Shin via @kdnuggets

While there may always seem to be something new, cool, and shiny in the field of AI/ML, classic statistical methods that leverage machine learning techniques remain powerful and practical for solving many real-world business problems.

Some really good points in this article that make sense if you think about it a bit more. I particularly like point #2 as anything that makes it easier to communicate with others definitely gets my vote.
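On that communication point: with a linear model, the fitted coefficients *are* the explanation. A minimal plain-Python sketch using the closed-form least-squares solution for one feature (the numbers are made up):

```python
# Why linear models are easy to explain: the slope and intercept
# are directly readable. Closed-form OLS for y = a + b*x.
def fit_simple_ols(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# e.g. spend vs sales, with invented figures
a, b = fit_simple_ols([1, 2, 3, 4], [3, 5, 7, 9])
print(f"each extra unit of x adds {b:.1f} to y (intercept {a:.1f})")
```

Try telling a stakeholder what neuron 47 in hidden layer 3 contributes; "each extra unit of x adds 2 to y" wins that conversation every time.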

Wednesday, 30 September 2020

Autograd: The Best Machine Learning Library You’re Not Using? by Kevin Vu via @Exxactcorp

If there is a Python library that is emblematic of the simplicity, flexibility, and utility of differentiable programming it has to be Autograd.

This is great and contains code fragments in order to help you explore this with a view to using it.
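Autograd's actual API essentially boils down to `grad = autograd.grad(f)`; purely as a flavour of what such a library does under the hood, here is a toy reverse-mode differentiation sketch in plain Python (this is an illustrative simplification, not Autograd's implementation):

```python
# Toy reverse-mode autodiff: each Var remembers its parents and the
# local gradient of the operation that produced it.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # (parent_var, local_gradient) pairs
        self.grad = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def backward(self, seed=1.0):
        # accumulate the chain rule back through the graph
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(3.0)
y = x * x + x        # f(x) = x^2 + x, so f'(3) = 2*3 + 1 = 7
y.backward()
print(x.grad)        # 7.0
```

The point of differentiable programming is that you write ordinary code and the library traces it to give you exact gradients for free.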

Monday, 6 July 2020

The Most Important Fundamentals of PyTorch you Should Know by Kevin Vu via @Exxactcorp

PyTorch is a constantly developing deep learning framework with many exciting additions and features. We review its basic elements and show an example of building a simple Deep Neural Network (DNN) step-by-step.

This was incredibly clear and very useful as it contained code examples for you to learn from. Definitely recommended.
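The article builds its DNN with `torch.nn`; as a dependency-free illustration of what such a small network actually computes, here is the forward pass of a one-hidden-layer net with made-up weights:

```python
import math

def forward(x, w1, b1, w2, b2):
    """One hidden layer with ReLU, then a sigmoid output unit."""
    hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    logit = sum(w * h for w, h in zip(w2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-logit))

# 2 inputs -> 2 hidden units -> 1 output (weights invented)
out = forward([1.0, 0.5],
              w1=[[0.4, -0.2], [0.3, 0.9]], b1=[0.0, 0.1],
              w2=[1.0, -1.0], b2=0.2)
print(out)
```

In PyTorch the same thing is a few lines of `nn.Linear` plus activations, with autograd handling the backward pass, which is exactly what the article walks you through.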

Monday, 4 May 2020

Lossless Image Compression through Super-Resolution by Sheng "Scott" Cao via @github

This is the official implementation of SReC in PyTorch. SReC frames lossless compression as a super-resolution problem and applies neural networks to compress images. SReC can achieve state-of-the-art compression rates on large datasets with practical runtimes. Training, compression, and decompression are fully supported and open-sourced.

This is really interesting and very useful. The link is to the official code repository on GitHub.

Wednesday, 11 March 2020

What AI still can’t do by Brian Bergstein via @techreview

Humans aren’t very good at understanding causation either.

I agree with Brian that we need to change our current AI thinking and combine a lot more sources of information using neural networks in order to get much better results.

Monday, 9 December 2019

Deep learning has hit a wall by Alex Woodie via @datanami

“The rapid growth in the size of neural networks is outpacing the ability of the hardware to keep up,” said Naveen Rao, vice president and general manager of Intel’s AI Products Group. Solving the problem will require rethinking how processing, network, and memory work together.

This sounds like a physical limitation that needs a two-pronged approach - hardware advances on one side, and adaptation of the tools and techniques used for AI and deep learning on the other.

Monday, 4 November 2019

This New Google Technique Help Us Understand How Neural Networks are Thinking by @jrdothoughts via @TDataScience


Interpretability remains one of the biggest challenges of modern deep learning applications. The recent advancements in computation models and deep learning research have enabled the creation of highly sophisticated models that can include thousands of hidden layers and tens of millions of neurons.

I found this fascinating and it is worth a read as well as a bookmark.

Monday, 12 August 2019

Deep learning is about to get easier by Ben Dickson via @VentureBeat

One problem with deep learning algorithms is that they require vast amounts of data. Fortunately, researchers have found workarounds that will level the playing field.

This is really interesting and anything that helps more people take advantage of AI has got to be a great thing (if it has been tested to make sure you can rely on the answers).

Wednesday, 31 July 2019

The AI technique that could imbue machines with the ability to reason by Karen Hao via @techreview

“At six months old, a baby won’t bat an eye if a toy truck drives off a platform and seems to hover in the air. But perform the same experiment a mere two to three months later, and she will instantly recognize that something is wrong. She has already learned the concept of gravity.” Yann LeCun, the chief AI scientist at Facebook, hypothesizes that a lot of what babies learn about the world is through observation. And that theory could have important implications for researchers hoping to advance the boundaries of AI.

I definitely agree with his observation on the number of pictures needed for learning to generally take place, which makes it NOTHING like the way a baby or young child learns things in real life. So unsupervised learning it is, then.

Small example of k-means in R:

# cluster the four iris measurements into 3 groups
km <- kmeans(iris[, 1:4], centers = 3)
# plot sepal length vs sepal width, coloured by cluster
plot(iris[, 1], iris[, 2], col = km$cluster)
# mark the cluster centres
points(km$centers[, c(1, 2)], col = 1:3, pch = 8, cex = 2)
# compare the clusters with the true species labels
table(km$cluster, iris$Species)

Wednesday, 29 May 2019

A Brief Introduction To GANs by/via @SarvasvKulpati

With explanations of the math and code

This is a great article with lots of links and examples so you can understand it. If you already have a Medium account please make sure you give him some applause for it and a follow.
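Alongside the article's maths, here is a minimal sketch of the two competing GAN loss terms: the discriminator maximises log D(x) + log(1 - D(G(z))), while the generator minimises the second term. The probabilities below are made-up numbers, not trained outputs:

```python
import math

def gan_losses(d_real, d_fake):
    """d_real = D(x) on real data, d_fake = D(G(z)) on fakes."""
    d_loss = -(math.log(d_real) + math.log(1.0 - d_fake))  # discriminator minimises this
    g_loss = math.log(1.0 - d_fake)                        # generator minimises this
    return d_loss, g_loss

# a confident discriminator facing a weak generator
d_loss, g_loss = gan_losses(d_real=0.9, d_fake=0.1)
print(d_loss, g_loss)
```

Training alternates gradient steps on the two losses, which is exactly the adversarial "game" the article unpacks.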

Monday, 6 May 2019

A Recipe for Training Neural Networks by/via @karpathy

A great blog post by Andrej Karpathy explaining how to avoid making common neural net mistakes. Worth a bookmark and a follow for him as a minimum I think.

I love that he goes through it step by step and lists so many pitfalls to avoid - surely if you follow ALL his advice you cannot fail?

Wednesday, 3 April 2019

Checklist for debugging neural networks by @CeceliaShao via @Medium

Tangible steps you can take to identify and fix issues with training, generalization, and optimization for machine learning models

This is great and definitely worthy of a bookmark and some applause if you have a Medium account.

Monday, 1 April 2019

How Artificial Intelligence Is Changing Science by Dan Falk via @QuantaMagazine

The latest AI algorithms are probing the evolution of galaxies, calculating quantum wave functions, discovering new chemical compounds and more. Is there anything that scientists do that can’t be automated?

This is a well-written and well-thought-out article highlighting the amazing things that AI and ML can and could achieve in the search to advance our knowledge of the universe and of science in general. I'm really excited about what is likely to be discovered as we move forward.

Monday, 18 February 2019

Understand TensorFlow by mimicking its API from scratch by @elmd_ via @Medium

This great tutorial mimics TensorFlow’s API and implements the core building blocks from scratch, giving you an under-the-hood look at how TensorFlow’s deep learning libraries work.

I love this - it is very clear and easy to understand. You really need to bookmark this if you want to understand or learn about TensorFlow.
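In the same spirit as that tutorial (though this is my own toy, not its code), the core idea is deferred execution: you build a graph of nodes first, and values are only computed when the graph is "run", as in TensorFlow 1.x sessions:

```python
# Minimal computation graph: construction and evaluation are separate.
class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

def constant(v): return Node("const", value=v)
def add(a, b):   return Node("add", (a, b))
def mul(a, b):   return Node("mul", (a, b))

def run(node):
    """The 'session': walk the graph and actually compute a value."""
    if node.op == "const":
        return node.value
    left, right = (run(n) for n in node.inputs)
    return left + right if node.op == "add" else left * right

# graph for (2 * 3) + 4, built before anything is computed
graph = add(mul(constant(2), constant(3)), constant(4))
print(run(graph))  # 10
```

Once operations are nodes like this, backpropagation is just another walk over the same graph, which is the "aha" moment the tutorial builds towards.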

Monday, 21 January 2019

A Radical New Neural Network Design Could Overcome Big Challenges in AI by Karen Hao via @techreview

Researchers borrowed equations from calculus to redesign the core machinery of deep learning so it can model continuous processes like changes in health.

This is a really interesting development and looks like a great idea. I'm excited for the things that might be possible using this technique.
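The calculus idea at the heart of this: instead of stacking discrete layers, treat the hidden state as an ODE, dh/dt = f(h), and integrate it. A sketch with plain Euler steps and a made-up dynamics function (the real work uses learned dynamics and adaptive solvers):

```python
def integrate(f, h0, t0, t1, steps):
    """Fixed-step Euler integration of dh/dt = f(h)."""
    h, dt = h0, (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h)   # each Euler step looks like one residual layer
    return h

# dh/dt = -h has the exact solution h0 * exp(-(t1 - t0))
h = integrate(lambda h: -h, h0=1.0, t0=0.0, t1=1.0, steps=1000)
print(h)  # ~ exp(-1) ~ 0.368
```

Because the "depth" is now continuous time, the solver can take as many or as few steps as the problem demands, which is why this suits irregularly sampled data like health records.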

Thursday, 25 October 2018

The Main Approaches to Natural Language Processing Tasks by Matthew Mayo via @kdnuggets

Let's have a look at the main approaches to NLP tasks that we have at our disposal. We will then have a look at the concrete NLP tasks we can tackle with said approaches.

Good lists of approaches, with examples, that are useful for both the learner and the more experienced practitioner to keep on hand as a reminder of them all.
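One of the classic approaches such lists usually start with is the bag-of-words representation; here is a minimal standard-library sketch of it (the two sample documents are invented):

```python
from collections import Counter

def bag_of_words(docs):
    """Turn documents into count vectors over a shared vocabulary."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    vectors = [[Counter(d.lower().split())[w] for w in vocab]
               for d in docs]
    return vocab, vectors

vocab, vecs = bag_of_words(["the cat sat", "the cat and the dog"])
print(vocab)  # ['and', 'cat', 'dog', 'sat', 'the']
print(vecs)   # [[0, 1, 0, 1, 1], [1, 1, 1, 0, 2]]
```

Word order is thrown away entirely, which is precisely the weakness that the neural approaches later in such lists are designed to fix.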

Monday, 22 October 2018

The neural history of natural language processing by Sebastian Ruder via @_aylien

Here's a review of the last 15 years of natural language processing (NLP) research.

I love this and think it is worth a read, if only to remind yourself how far we have already come, and that, judging from the pace of change, great things are always possible and more are coming.

Thursday, 13 September 2018

AI Knowledge Map: How To Classify AI Technologies by Francesco Corea via @kdnuggets

What follows is then an effort to draw an architecture to access knowledge on AI and follow emergent dynamics, a gateway of pre-existing knowledge on the topic that will allow you to scout around for additional information and eventually create new knowledge on AI.

I love the diagram and explanations in this article - it is worth printing and keeping to hand.

Thursday, 19 April 2018

Choose the right AI method for the job by Stephan Jou via @VentureBeat

It’s hard to remember the days when artificial intelligence seemed like an intangible, futuristic concept. Today, AI is everywhere. This has been decades in the making, however, and the past 90 years have seen both renaissances and winters for the field of study.

Some great comments from Stephan.