Showing posts with label JUPYTER. Show all posts

Friday, 26 November 2021

5 Jupyter Extensions to Improve your Productivity by @CornelliusYW via @TDataScience

These packages extend the Jupyter Notebook's functionality.

These are definitely worth testing and assessing, as I think they will make your life much easier in Jupyter.

Monday, 15 March 2021

Are You Still Using Pandas to Process Big Data in 2021? Here are two better options by Roman Orac via @kdnuggets

When it's time to handle a lot of data -- so much that you are in the realm of Big Data -- what tools can you use to wrangle it, especially in a notebook environment? Pandas does not handle truly Big Data very well, but two other libraries do. So, which one is better and faster?

These are some great suggestions and well worth an experiment: if you benchmark all of them (including Pandas) against your own workload, you may find something much better, which will be to your advantage.
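The common thread in these bigger-than-memory libraries is that they stream or partition the data instead of loading it all into RAM at once. The two libraries aren't named above, so as a purely illustrative stdlib-only sketch, here is the underlying out-of-core pattern they build on:

```python
import csv
import io
from collections import defaultdict

# Toy in-memory CSV standing in for a file too large to load at once.
raw = io.StringIO("city,sales\nA,10\nB,20\nA,30\nB,40\n")

totals = defaultdict(int)
for row in csv.DictReader(raw):   # one row at a time: constant memory
    totals[row["city"]] += int(row["sales"])

print(dict(totals))  # {'A': 40, 'B': 60}
```

The dedicated libraries wrap this idea in a Pandas-like API, which is exactly what makes them easy to benchmark head-to-head against Pandas itself.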

Monday, 18 January 2021

Best Python IDEs and Code Editors You Should Know by Claire D Costa via @kdnuggets

Developing machine learning algorithms requires working with countless libraries and integrating many supporting tools and software packages. All this magic must be written by you in yet another tool -- the IDE -- which is fundamental to all your code work and can drive your productivity. These top Python IDEs and code editors are among the best tools available for you to consider and are reviewed with their noteworthy features.

This is really useful and very clear. You might find a new favourite by going through this list.


Tuesday, 29 January 2019

WEBINAR: Cutting Time, Complexity and Costs from Data Science to Production - 6th February 2019

WEBINAR

Cutting Time, Complexity and Costs from Data Science to Production

One-click (really!) deployment to production without any heavy lifting from data and DevOps engineers
Wednesday, February 6 at 8am PT
Imagine a system where one collects real-time data, develops a machine learning model… Runs analysis and training on powerful GPUs… Clicks on a magic button and then deploys code and ML models to production… All without any heavy lifting from data engineers. Today, data scientists work on laptops with just a subset of data and time is wasted while waiting for data and compute.
It’s about efficient use of time! Join Iguazio and NVIDIA so that you can get home early today! Learn how to speed up data science from development to production:
  • Access large-scale, real-time and operational data without waiting for ETL
  • Run high-performance analytics and ML on NVIDIA GPUs (RAPIDS)
  • Work on a shared, pre-integrated Kubernetes cluster with Jupyter notebook and leading data science tools
Featured Speakers:
Yaron Haviv, CTO, Iguazio
Or Zilberman, Data Scientist, Iguazio
Jacci Cenci, Sr Technical Marketing Engineer, NVIDIA
Register here


Monday, 14 May 2018

Authoring Custom Jupyter Widgets by @QuantStack via @Medium

Jupyter widgets provide a means to bridge the kernel and the rich ecosystem of JavaScript visualisation libraries for the web browser. This is an amazing opportunity for scientific developers to use all these resources in their language of choice.

This has some great code sections to really help you get started on this. Well worth a bookmark.
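At their core, widgets are Python objects whose attributes are kept in sync with a JavaScript model in the browser over Jupyter's comm channel. As a rough stdlib-only sketch of that sync idea (the names here are illustrative, not the real ipywidgets API):

```python
# Illustrative sketch of the widget sync idea: a Python-side model whose
# attribute changes are broadcast to listeners, much as ipywidgets syncs
# state to the browser front end. Not the real ipywidgets API.

class SyncedModel:
    def __init__(self, **state):
        self._state = dict(state)
        self._observers = []          # stand-ins for the front-end view

    def observe(self, callback):
        self._observers.append(callback)

    def set(self, key, value):
        old = self._state.get(key)
        self._state[key] = value
        for cb in self._observers:    # "send the change to the browser"
            cb(key, old, value)

messages = []
slider = SyncedModel(value=0)
slider.observe(lambda key, old, new: messages.append((key, old, new)))
slider.set("value", 42)
print(messages)  # [('value', 0, 42)]
```

The article's custom widgets pair a model like this with a JavaScript view that renders it and sends user interactions back the other way.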

Wednesday, 25 October 2017

Introducing R-Brain: A New Data Science Platform by @idigdata via @kdnuggets

R-Brain is a next-generation platform for data science built on top of JupyterLab with Docker. It supports not only R but also Python and SQL, and has integrated IntelliSense, debugging, packaging, and publishing capabilities.

Great article and it sounds like a great platform. I'm hoping to have a go with it next week if I can find the time to play.

Monday, 6 February 2017

The state of Jupyter by Fernando Pérez and Brian Granger via @OReillyMedia

This describes how Project Jupyter got here and where we are headed.

Interesting if you are not up to date on what it actually is and how it came to be.

Tuesday, 6 December 2016

How the Singapore Circle Line rogue train was caught with data via @datagovsg

Great data detective story! For months, a train line suffered from mysterious disruptions and created confusion and distress. Here's how a team of data scientists saved the day.

This is a great real-life example and shows the results you can get when you dig down into the data.

Thursday, 24 November 2016

WEBINAR: The DNA of a Data Science Rock Star - 29 November 2016


Overview
Title: The DNA of a Data Science Rock Star
Date: Tuesday, November 29, 2016
Time: 09:00 AM Pacific Standard Time
Duration: 1 hour
Summary
The DNA of a Data Science Rock Star
Data Scientists are tasked with transforming their organizations with data. Yet many are struggling to realize their true Rock Star potential, and organizations are missing out on what these Rock Stars could do with the right environment.
Join us for this latest Data Science Central Webinar and learn what skills, tools, and behaviors are emerging as the DNA of the Rock Star Data Scientist. We will explore best practices for Big Data Analytics through Open Source technologies (e.g. Apache Spark, R, RStudio, Python, Jupyter), techniques including machine learning, and behaviors around collaboration, sharing and learning.
Speakers:
Carlo Appugliese, Hadoop & Spark Evangelist -- IBM Analytics 
Greg Filla, Associate Offering Manager, Data Science Experience -- IBM Analytics
Hosted by: 
Bill Vorhies, Editorial Director -- Data Science Central

Register here

Friday, 12 August 2016

Statistical Data Analysis in Python by Christopher Fonnesbeck via @kdnuggets

This tutorial, which takes the form of a set of IPython notebooks, introduces the use of Python for statistical data analysis, with data stored as Pandas DataFrame objects.

Useful, but it does contain links to courses that don't exist.
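The notebooks open with descriptive statistics on DataFrames, where you would typically reach for `df.describe()`. To give a flavour of the same summaries using only the standard library (an illustrative sketch, not code from the tutorial):

```python
# Stdlib-only flavour of the descriptive statistics the tutorial covers
# with Pandas DataFrames (there you would use df.describe() instead).
import statistics

heights = [160.2, 172.5, 168.0, 181.3, 175.9, 166.4]

summary = {
    "count": len(heights),
    "mean": round(statistics.mean(heights), 2),
    "stdev": round(statistics.stdev(heights), 2),  # sample std dev
    "median": round(statistics.median(heights), 2),
}
print(summary)
```

With a DataFrame the same numbers come back as a single table, one column per variable, which is why the notebooks lean on Pandas throughout.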

Friday, 15 January 2016

WEBINAR: Predict. Share. Deploy. With Open Data Science - 20 January 2016



Machine learning and predictive analytics open up many new opportunities to create business value. From predicting new customers to meaningfully optimizing the business, data scientists can unlock incredible business value.

But building, tuning, sharing, deploying, and scaling these models is challenging, and rarely covered in statistics class. How can you make data science work in the real world?

We are here to help -- Continuum Analytics Data Scientist Christine Doig will teach you how to create, share, scale, and operationalise your models in our webinar on Wednesday, January 20th.

In this webinar, you'll learn to:

  • Build predictive models with Anaconda, using Python packages such as pandas and scikit-learn, in the Jupyter Notebook
  • Use Anaconda and R together in your data science workflow
  • Share and collaborate with your team using Jupyter Notebooks
  • Scale out your work across hundreds of nodes with Anaconda

Christine will also conduct a Q&A session after the webinar -- so tune in and get your data science questions answered.

Register here

Saturday, 26 December 2015

Data Science for Losers, Part 7 - Using Azure ML via @brakmic

In part 7, Harris takes us further into coding with Azure for machine learning. Make sure you have gone through part 6 first. It includes Python code, but I think you can still follow what is happening even if you don't know Python.

Saturday, 19 December 2015

Data Science for Losers, Part 6 - Azure ML via @brakmic

In part 6, Harris takes us through Azure Machine Learning. He points to a great course on edX too.