
Wednesday, 9 March 2022

WEBINAR: Shadow AI: The Silent Killer of Deep Learning Productivity - 17 March 2022

 

Sponsored News from Data Science Central


Shadow AI: The Silent Killer of Deep Learning Productivity

What: free online webinar exclusively for IT Pros
When: Thursday, March 17th, 12:00 PM EDT
Where: From the convenience of your personal computer 

Register Now

In this latest Data Science Central webinar, meet Shadow IT’s younger sibling.

Shadow AI is the result of one-off AI initiatives inside organizations, where siloed AI teams buy their own infrastructure or use cloud compute resources.

While no organization wants to stifle the ambition of its data science teams, this decentralized approach results in many AI initiatives never making it to production.

Learn from Gijsbert Janssen van Doorn at Run:ai how to centralize AI so that it is accessible and productive across an entire organization.


In this webinar, you will learn:
  • Why Shadow AI exists, and how to prevent it at your organization

  • The benefits of centralizing your AI compute resources

  • What a best-practice centralized AI infrastructure looks like

Hope to see you there,

Sean Welch
Data Science Central

Friday, 29 October 2021

Write Better And Faster Python Using Einstein Notation by Bilal Himite via @TDataScience

Make your code more readable, concise, and efficient using “einsum”

I had never heard of this and was fascinated to find out more. I also found this additional article useful:

Understanding einsum for Deep learning: implement a transformer with multi-head self-attention from scratch
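
To give a flavour of the notation, here are a few einsum one-liners in NumPy (my own examples rather than either article's):

import numpy as np

A = np.random.rand(3, 4)
B = np.random.rand(4, 5)

# Matrix multiplication: the repeated index j is summed over
C = np.einsum("ij,jk->ik", A, B)
assert np.allclose(C, A @ B)

# Trace: a repeated index with no output labels sums the diagonal
t = np.einsum("ii->", np.eye(4))  # 4.0

# Axis permutation: reorder the output labels to transpose each matrix in a batch
X = np.random.rand(10, 3, 4)
Xt = np.einsum("bij->bji", X)  # shape (10, 4, 3)

Once you can read the subscript strings, whole chains of reshape/transpose/sum calls collapse into a single line.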

Monday, 7 June 2021

A Comprehensive Introduction to Bayesian Deep Learning by Joris Baan via @TOPBOTS

Even knowing basic probability theory, you may have a hard time understanding and connecting that to modern Bayesian deep learning research. In this article, Joris Baan aims to bridge that gap and provides a comprehensive introduction.

I found this useful, especially if you could do with a refresher on Bayesian DL; I certainly appreciated the reminder and was able to approach the topic with a fresh viewpoint.
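
One practical approximation that often comes up in this area is Monte Carlo dropout; here is a minimal PyTorch sketch of the idea (my own illustration, not code from the article):

import torch
import torch.nn as nn

# Toy regressor with dropout between layers
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1)
)

def mc_dropout_predict(model, x, n_samples=100):
    # Keep dropout active at inference time and average many stochastic
    # forward passes; the spread gives a rough uncertainty estimate
    model.train()  # train mode keeps the Dropout layers stochastic
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

x = torch.linspace(-1, 1, 50).unsqueeze(1)
mean, std = mc_dropout_predict(model, x)  # predictions plus uncertainty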

Wednesday, 3 March 2021

Unit Testing in Deep Learning by @msminhas93 via @TDataScience

In this article, Manpreet talks about unit tests and why, as well as how, to incorporate them in your code. He starts with a brief introduction to unit tests, followed by an example of unit tests in deep learning and how to run them both via the command line and the VS Code test explorer.

I really enjoyed reading this. It helps you understand that, while deep learning might be new, it is not exempt from unit testing before you put it live, just like any other piece of code.
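
As a taster, here are two pytest-style checks I find myself writing for models (my own sketch, not taken from the article):

import torch
import torch.nn as nn

def build_model():
    # Hypothetical model under test; a stand-in for whatever you ship
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

def test_output_shape():
    # A batch of MNIST-sized images should map to 10 logits each
    model = build_model()
    x = torch.randn(4, 1, 28, 28)
    assert model(x).shape == (4, 10)

def test_parameters_update():
    # One optimizer step should change at least one parameter,
    # a cheap guard against frozen or disconnected layers
    model = build_model()
    before = [p.clone() for p in model.parameters()]
    model(torch.randn(4, 1, 28, 28)).sum().backward()
    torch.optim.SGD(model.parameters(), lr=0.1).step()
    assert any(not torch.equal(b, p) for b, p in zip(before, model.parameters()))

Run them with pytest from the command line, or straight from the VS Code test explorer as the article describes.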

Wednesday, 6 January 2021

Top 7 Data Libraries You Will Absolutely Need for Your Next Deep Learning Project @orhangaziyalcin via @TDataScience

You might be an expert in TensorFlow or PyTorch, but you must take advantage of these open-source Python libraries to succeed.

A very useful list of some of the most relevant libraries to use if you want to do a Deep Learning project. This could save you the time you would have spent searching for the right libraries.

Wednesday, 30 September 2020

Autograd: The Best Machine Learning Library You’re Not Using? by Kevin Vu via @Exxactcorp

If there is a Python library that is emblematic of the simplicity, flexibility, and utility of differentiable programming, it has to be Autograd.

This is great and contains code fragments to help you explore the library with a view to using it yourself.
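
To show how little ceremony it needs, here is the kind of minimal example the library is known for (close in spirit to its README; treat it as a sketch):

import autograd.numpy as np  # thinly wrapped NumPy that records operations
from autograd import grad

def tanh(x):
    return (1.0 - np.exp(-2 * x)) / (1.0 + np.exp(-2 * x))

dtanh = grad(tanh)            # a function computing the derivative of tanh
print(dtanh(1.0))             # ~0.419974
print(1 - np.tanh(1.0) ** 2)  # the analytic derivative agrees

Because grad returns an ordinary function, it composes: grad(grad(tanh)) gives you the second derivative for free.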

Wednesday, 5 August 2020

Labelling Data Using Snorkel by Alister D’Costa and others at NLP4H via @kdnuggets

In this tutorial, we walk through the process of using Snorkel to generate labels for an unlabelled dataset. We will provide you with examples of basic Snorkel components by guiding you through a real clinical application of Snorkel.

This was really interesting and could be a way to save time in the long run.
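
The core idea is writing small labelling functions and letting Snorkel combine their noisy votes. A minimal sketch on toy data (the rules and names here are hypothetical, not the article's clinical ones):

import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

@labeling_function()
def lf_no_evidence(x):
    # Vote NEGATIVE when the note explicitly rules the condition out
    return NEGATIVE if "no evidence of" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_diagnosed(x):
    return POSITIVE if "diagnosed with" in x.text.lower() else ABSTAIN

df = pd.DataFrame({"text": [
    "Patient diagnosed with pneumonia.",
    "No evidence of acute disease.",
]})

applier = PandasLFApplier(lfs=[lf_no_evidence, lf_diagnosed])
L_train = applier.apply(df)  # one row per example, one column per LF

label_model = LabelModel(cardinality=2)  # denoises conflicting votes
label_model.fit(L_train, n_epochs=100)
probs = label_model.predict_proba(L_train)  # probabilistic training labels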

Friday, 24 July 2020

The Frameworks that Google, DeepMind, Microsoft and Uber Use to Train Deep Learning Models at Scale by @jrdothoughts via @Medium

GPipe, Horovod, TF-Replicator and DeepSpeed combine cutting edge aspects of deep learning research and infrastructure to scale the training of deep learning models.

I found this fascinating. I hadn't quite connected the dots between these frameworks in my mind before.
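
Of the four, Horovod is probably the easiest to try yourself. A minimal data-parallel PyTorch sketch, assuming a launch like horovodrun -np 4 python train.py and one GPU per worker:

import torch
import horovod.torch as hvd

hvd.init()
torch.cuda.set_device(hvd.local_rank())  # pin each process to its GPU

model = torch.nn.Linear(10, 1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Wrap the optimizer so gradients are averaged across all workers
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters()
)
hvd.broadcast_parameters(model.state_dict(), root_rank=0)  # identical start

for _ in range(100):
    x, y = torch.randn(32, 10).cuda(), torch.randn(32, 1).cuda()
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()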

Monday, 6 July 2020

The Most Important Fundamentals of PyTorch you Should Know by Kevin Vu via @Exxactcorp

PyTorch is a constantly developing deep learning framework with many exciting additions and features. We review its basic elements and show an example of building a simple Deep Neural Network (DNN) step-by-step.

This was incredibly clear and very useful as it contained code examples for you to learn from. Definitely recommended.
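
For the impatient, the shape of what the article builds looks roughly like this (my own condensed version, not the article's code):

import torch
import torch.nn as nn

# A small DNN: 4 input features, one hidden layer, 3 output classes
model = nn.Sequential(
    nn.Linear(4, 16),
    nn.ReLU(),
    nn.Linear(16, 3),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on dummy data
x = torch.randn(8, 4)
y = torch.randint(0, 3, (8,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()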

Monday, 24 February 2020

Deep learning isn’t hard anymore by Caleb Kaiser via @TDataScience

Deep learning used to require large amounts of data, deep pockets, and a novel, usually custom-built, architecture. But with transfer learning (which takes a pre-trained model and retrains the last layers of the model to focus on a new task), a single engineer can deploy a model in a new domain in a matter of days.

There is a great link in the article to a primer on Transfer Learning which is well worth the time investment in reading and learning so you can take advantage of that technique.
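
In PyTorch terms, the retrain-the-last-layers trick is only a few lines. A sketch with torchvision, where the 5-class task is hypothetical:

import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-18 and freeze its weights
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Swap the final layer for the new task; only this layer gets trained
model.fc = nn.Linear(model.fc.in_features, 5)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

(Newer torchvision versions prefer a weights= argument over pretrained=True, but the idea is the same.)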

Monday, 27 January 2020

Google Research: Looking Back at 2019, and Forward to 2020 and Beyond by @JeffDean via @googleai

Google has a massive impact on the tools, applications and research that help steer the data science community. As with prior years, this retrospective by Jeff Dean is amazing in its scope. Includes useful summaries, screenshots, videos and linked references throughout.

This is just golden and everyone really needs to read this. You get so many ideas and learn a great deal from this too.

Monday, 9 December 2019

Deep learning has hit a wall by Alex Woodie via @datanami

“The rapid growth in the size of neural networks is outpacing the ability of the hardware to keep up,” said Naveen Rao, vice president and general manager of Intel’s AI Products Group. Solving the problem will require rethinking how processing, network, and memory work together.

This sounds like a physical limitation that needs a two-pronged approach: one prong is hardware advances, but the other is adapting the tools and techniques used for AI and deep learning.

Monday, 4 November 2019

This New Google Technique Helps Us Understand How Neural Networks are Thinking by @jrdothoughts via @TDataScience


Interpretability remains one of the biggest challenges of modern deep learning applications. The recent advancements in computation models and deep learning research have enabled the creation of highly sophisticated models that can include thousands of hidden layers and tens of millions of neurons.

I found this fascinating and it is worth a read as well as a bookmark.

Wednesday, 16 October 2019

We can’t trust AI systems built on deep learning alone by Karen Hao via @techreview

Gary Marcus believes that while deep learning has played an important role in advancing AI, it’s overhyped and the current overemphasis on it may lead to its demise. The article details how he thinks we could achieve general intelligence, why we should strive for it, and why it might make machines safer.

This was fascinating and a refreshing viewpoint on deep learning and AI, one that was more about the learning and less about the technical detail.

Monday, 14 October 2019

What a little more computing power can do for Deep Learning by Kim Martineau via @MIT

A deep learning model may need to see millions of photos before it can successfully identify a cat. The process is computationally intensive. But there may be a more efficient way: new MIT research shows that models a fraction of the size can be sufficient.

An interesting viewpoint which could help to save money and time when developing this kind of model.
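
As a rough illustration of how much smaller such models can be, PyTorch ships a pruning utility that zeros out low-magnitude weights. This is a generic sketch of the idea, not the method from the MIT research:

import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 256)

# Zero out the 90% of weights with the smallest absolute value
prune.l1_unstructured(layer, name="weight", amount=0.9)

sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.0%}")  # ~90% of the weights are now zero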

Friday, 27 September 2019

Which Data Science Skills are core and which are hot/emerging ones? by Gregory Piatetsky, via @kdnuggets

They have identified two main groups of Data Science skills: A: 13 core, stable skills that most respondents have and B: a group of hot, emerging skills that most do not have (yet) but want to add. See our detailed analysis.

This should be very useful for anyone who is already working in or wants to be working in Data Science. Great diagrams too.

Friday, 6 September 2019

Top Deep Learning Frameworks of 2019 and How Do They Compare by Gaurav Belani via @AiThority

This post looks at five deep learning platforms and offers the pros and cons of each.

I really like that the pros and cons are given for the five on this list. I'm not sure I could choose between them, although I think I would lean closer to Keras than to TensorFlow; that is more a personal preference, though, and not necessarily something you should take notice of.
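
Part of why I lean that way: a complete Keras model definition and training setup fits in a handful of lines (illustrative only):

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()  # quick sanity check of shapes and parameter counts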

Monday, 12 August 2019

Deep learning is about to get easier by Ben Dickson via @VentureBeat

One problem with deep learning algorithms is that they require vast amounts of data. Fortunately, researchers have found workarounds that will level the playing field.

This is really interesting and anything that helps more people take advantage of AI has got to be a great thing (if it has been tested to make sure you can rely on the answers).

Wednesday, 31 July 2019

The AI technique that could imbue machines with the ability to reason by Karen Hao via @techreview

“At six months old, a baby won’t bat an eye if a toy truck drives off a platform and seems to hover in the air. But perform the same experiment a mere two to three months later, and she will instantly recognize that something is wrong. She has already learned the concept of gravity.” Yann LeCun, the chief AI scientist at Facebook, hypothesizes that a lot of what babies learn about the world is through observation. And that theory could have important implications for researchers hoping to advance the boundaries of AI.

I definitely agree with his observation on the number of pictures needed for learning to generally take place, which makes it NOTHING like the way a baby or young child would learn things in real life. So unsupervised learning it is, then.

Small example of k-means in R:

set.seed(42)  # kmeans uses random starts, so fix the seed for reproducibility
km <- kmeans(iris[, 1:4], centers = 3)  # cluster the four measurements into 3 groups
plot(iris[, 1], iris[, 2], col = km$cluster)  # first two features, coloured by cluster
points(km$centers[, c(1, 2)], col = 1:3, pch = 8, cex = 2)  # mark the cluster centres
table(km$cluster, iris$Species)  # compare clusters against the true species labels

Friday, 19 July 2019

Where We See Shapes, AI Sees Textures by Jordana Cepelewicz via @QuantaMagazine

Deep learning vision algorithms often fail at classifying images because they take cues from textures, not shapes. This is a really interesting look at how machine vision actually processes the world.

This is absolutely fascinating and a great approach as to how relatively minor changes might make all the difference to your algorithms and outcomes.