Data Science Weekly - Issue 487
Curated news, articles and jobs related to Data Science
March 23, 2023
Hello and thank you for tuning in to Issue #487.
This is Hannah and Sebastian, curators of the Data Science Weekly newsletter.
We appreciate your support :)
Once a week we write this email to share the links we thought were worth sharing in the Data Science, ML, AI, Data Visualization, and ML/Data Engineering worlds.
If you find this useful, please consider becoming a paid subscriber here:
Hope you enjoy it.
And now, let's dive into some interesting links from this week:
Find words “halfway” between two others
This uses GPT-3 to find a word "halfway" between two others. It's built with Elixir, Phoenix, and Ash, and deployed on Fly.io…
Some fun ones:
- imagination and vacation -> fantasize
- enthusiasm and zeal -> fervor
- angry and hungry -> famished
- spoon and fork -> spork…
Online daters are less open-minded than their filters suggest [Pay-walled, free: Interactive graph available to play with]
Users with permissive settings show similar biases to those with restrictive ones…
Every Possible Wordle Solution Visualized
For a lot of us, Wordle has worked its way into that rotation. As I've been playing over the last year, my curiosity has had me wondering things like "how many possible words are there?", "how many words start and end with the same letter?", and "how many words use Y as the only vowel?". To try to answer those questions, and more, I built this Wordle visualization tool!…
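Questions like these reduce to simple filters over a word list. A toy sketch (using a stand-in word list, not the real Wordle answer set):

```python
# Stand-in list; the real Wordle answer set has roughly 2,300 five-letter words.
words = ["crane", "spork", "eerie", "gypsy", "truth", "robot", "tryst"]

# "How many words start and end with the same letter?"
same_ends = [w for w in words if w[0] == w[-1]]

# "How many words use Y as the only vowel?"
y_only = [w for w in words if "y" in w and not set(w) & set("aeiou")]

print(same_ends)  # ['eerie', 'tryst']
print(y_only)     # ['gypsy', 'tryst']
```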
A Message from this week's Sponsor:
Check out the results of the “MLOps is more than just tools” survey among ML practitioners
TheSequence partnered with Toloka to explore what MLOps culture looks like across the industry at the start of 2023. A huge variety of tools are available for ML development, but the culture and practices still have some catching up to do. TheSequence asked their community of over 155,000 data scientists, ML engineers, and AI enthusiasts to share their thoughts about it. Toloka helped bring in even more insights by promoting the survey in other top newsletters. Finally, TheSequence summarized the results of the survey and prepared the report we are excited to share here.
Want to sponsor the newsletter? Email us for details --> firstname.lastname@example.org
Data Science Articles & Videos
Topic Modeling for the People
In this quick and practical guide, I’m going to share a set of steps that you can follow to get coherent topics from most datasets. You can think of this like a topic modeling recipe. These tips are partly based on my personal experience and partly on important research done by others!…Two caveats: I’ll be focusing on latent Dirichlet allocation (LDA), but the tips about evaluation apply more broadly. And some of these tips are English-centric, as preprocessing steps like stemming can have different effects in other languages…
Comparing List Comprehensions vs. Built-In Functions in Python: Which Is Better?
In Python, a situation often arises where a programmer must choose between a functional programming approach, such as the built-in function reduce(), and the more Pythonic list comprehensions…In this article, we'll explore the pros and cons of these distinct approaches through the lens of syntax, readability, and performance…
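For a quick feel for the trade-off, here is the same computation written both ways (a minimal sketch, not from the article):

```python
from functools import reduce

numbers = [1, 2, 3, 4, 5]

# Functional approach: fold the list with an explicit accumulator.
squares_sum_reduce = reduce(lambda acc, n: acc + n * n, numbers, 0)

# Pythonic approach: a generator expression fed to the built-in sum().
squares_sum_comp = sum(n * n for n in numbers)

assert squares_sum_reduce == squares_sum_comp == 55
```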
Graph Neural Networks in 2023
What are the actual advantages of Graph Machine Learning? And why do Graph Neural Networks matter in 2023? This article recaps some highly impactful applications of GNNs. It is the first in a series that takes a deep dive into Graph Machine Learning, giving you everything you need to know to get up to speed on the next big wave in AI…
audioFlux: A library for audio and music analysis, feature extraction
audioFlux is a deep learning tool library for audio and music analysis and feature extraction. It supports dozens of time-frequency analysis transform methods and hundreds of corresponding time-domain and frequency-domain feature combinations. These can be fed to deep learning networks for training and used to study various tasks in the audio field such as classification, separation, Music Information Retrieval (MIR), and ASR…
Prompt Engineering
Prompt Engineering, also known as In-Context Prompting, refers to methods for communicating with an LLM to steer its behavior toward desired outcomes without updating the model weights. It is an empirical science, and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics…This post focuses only on prompt engineering for autoregressive language models…At its core, the goal of prompt engineering is about alignment and model steerability…
In defense of prompt engineering
Prompt engineering as a discipline doesn’t get nearly the respect it deserves…The argument I see against both of these is the same: as AI language models get “better”, prompt engineering as a skill will quickly become obsolete. Investing time in learning prompt engineering skills right now will have a very short window of utility…I disagree…
GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
We investigate the potential implications of Generative Pre-trained Transformer (GPT) models and related technologies on the U.S. labor market. Using a new rubric, we assess occupations based on their correspondence with GPT capabilities, incorporating both human expertise and classifications from GPT-4. Our findings indicate that approximately 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of GPTs, while around 19% of workers may see at least 50% of their tasks impacted. The influence spans all wage levels, with higher-income jobs potentially facing greater exposure…
The Bayesian Killer App
So what is the Bayesian killer app? That is, for people who don’t know much about Bayesian methods, what’s the application that demonstrates their core value? I have a nomination: Thompson sampling, also known as the Bayesian bandit strategy, which is the foundation of Bayesian A/B testing…
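The appeal is easy to show in code: keep a Beta posterior per variant, sample from each, and play the variant whose sample looks best. A self-contained sketch of Beta-Bernoulli Thompson sampling for a two-variant A/B test (illustrative conversion rates, not from the article):

```python
import random

random.seed(0)

# True (unknown) conversion rates for two variants of a page.
true_rates = {"A": 0.05, "B": 0.11}
# Beta(1, 1) priors, stored as [alpha, beta] = [successes + 1, failures + 1].
posteriors = {arm: [1, 1] for arm in true_rates}

for _ in range(5000):
    # Sample a plausible rate from each posterior; play the best-looking arm.
    samples = {arm: random.betavariate(a, b) for arm, (a, b) in posteriors.items()}
    arm = max(samples, key=samples.get)
    # Simulate a visitor converting (or not) at the arm's true rate.
    if random.random() < true_rates[arm]:
        posteriors[arm][0] += 1
    else:
        posteriors[arm][1] += 1

plays = {arm: a + b - 2 for arm, (a, b) in posteriors.items()}
# The better variant "B" should end up receiving the bulk of the traffic,
# while "A" still gets enough plays to rule it out — exploration for free.
```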
Privileged Bases in the Transformer Residual Stream
Our mathematical theories of the Transformer architecture suggest that individual coordinates in the residual stream should have no special significance…Recent work has shown that this expectation is false in practice. We investigate this phenomenon and provisionally conclude that the per-dimension normalizers in the Adam optimizer are to blame for the effect…We explore two other obvious sources of basis dependency in a Transformer: layer normalization and finite-precision floating-point calculations. We confidently rule these out as the source of the observed basis alignment…
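Why would an optimizer privilege a basis? Because Adam divides each coordinate's update by that coordinate's own running RMS, the update rule is not rotation-invariant. A tiny sketch of one Adam step from zeroed moments, comparing a basis-aligned gradient against the same gradient rotated 45° (our illustration, not the paper's code):

```python
import math

def adam_step(grad, m, v, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; note every operation is element-wise (per coordinate)."""
    new_m = [b1 * mi + (1 - b1) * g for mi, g in zip(m, grad)]
    new_v = [b2 * vi + (1 - b2) * g * g for vi, g in zip(v, grad)]
    update = [lr * mi / (math.sqrt(vi) + eps) for mi, vi in zip(new_m, new_v)]
    return update, new_m, new_v

# A unit gradient along a basis direction vs. the same gradient rotated 45 degrees:
u1, _, _ = adam_step([1.0, 0.0], [0.0, 0.0], [0.0, 0.0])
r = 1.0 / math.sqrt(2.0)
u2, _, _ = adam_step([r, r], [0.0, 0.0], [0.0, 0.0])

# With plain SGD, u2 would just be u1 rotated (same length). With Adam, the
# per-coordinate sqrt(v) normalizer gives u2 the same magnitude as u1 along
# EACH axis, so the rotated update is longer overall: coordinates are special.
```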
Eliciting Latent Predictions from Transformers with the Tuned Lens
Ever wonder how a language model decides what to say next?…Our method, the tuned lens, can trace an LM’s prediction as it develops from one layer to the next. It's more reliable and applies to more models than prior state-of-the-art…To do so, we train an affine probe for each block in a frozen pre-trained model, making it possible to decode every hidden state into a distribution over the vocabulary. Our method, the tuned lens, is a refinement of the earlier "logit lens" technique, which yielded useful insights but is often brittle…
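Schematically, the difference between the two lenses is whether the hidden state passes through a learned per-layer affine map before hitting the unembedding. A toy numpy sketch with made-up dimensions and random stand-ins for the frozen model's weights (our illustration of the idea, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab = 8, 20

# Stand-ins for a frozen model's pieces: a hidden state at some layer,
# the final unembedding matrix, and a learned per-layer affine probe.
hidden = rng.normal(size=d_model)
unembed = rng.normal(size=(d_model, vocab))
probe_W = np.eye(d_model) + 0.01 * rng.normal(size=(d_model, d_model))
probe_b = np.zeros(d_model)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Logit lens: decode the hidden state directly through the unembedding.
logit_lens = softmax(hidden @ unembed)

# Tuned lens: first map the hidden state through the layer's affine probe
# (trained so the decoded distribution matches the model's final output),
# then decode — this is what makes the readout reliable at early layers.
tuned_lens = softmax((probe_W @ hidden + probe_b) @ unembed)
```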
What if -- despite all the hype -- we are in fact underestimating the effect LLMs [Twitter Thread]
What if -- despite all the hype -- we are in fact underestimating the effect LLMs will have on the nature of software distribution and end-user programming? Some early, very tentative thoughts…
Instruct-NeRF2NeRF - Editing 3D Scenes with Instructions
We propose a method for editing NeRF scenes with text-instructions. Given a NeRF of a scene and the collection of images used to reconstruct it, our method uses an image-conditioned diffusion model (InstructPix2Pix) to iteratively edit the input images while optimizing the underlying scene, resulting in an optimized 3D scene that respects the edit instruction. We demonstrate that our proposed method is able to edit large-scale, real-world scenes, and is able to accomplish more realistic, targeted edits than prior work…
Software Developer Job Opportunity at Observable, Inc
SALARY AND HOURS: $107,640 - $150,000 per year; 40 hours per week.
EXPERIENCE AND REQUIREMENTS: Bachelor's degree in Computer Science.
DESCRIPTION OF DUTIES:
- Design, develop, test, deploy, maintain, and improve software
- Write code for Observable’s product and platform, create reliable and sustainable systems, and develop prototypes quickly
- Write unit and integration tests to ensure the software is functioning correctly and securely
- Deploy and release software at a regular cadence
- Support and improve the software through on-call and support tasks
- Communicate and interact with users to understand their requirements and respond to their issues
- Collaborate on projects with designers, engineers, and product managers
Want to post a job here? Email us for details --> email@example.com
Training & Resources
Object Detection from Scratch - Part 1
This is the start of my new series, "Object Detection from Scratch", which focuses on building intuition for how single-pass object detectors such as YOLO and SSD work...In this series, I will incrementally build up a YOLO/SSD (Single Shot Detector) object detection model with just PyTorch and the current version of the fastai 2 library. Both SSD and YOLO perform single-pass inference and can run efficiently on fairly low-end hardware, enabling real-time object detection for video content…
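Detectors like YOLO and SSD lean on a few small primitives; intersection-over-union (IoU), used to match predicted boxes to ground truth and to filter duplicates, is a good place to start. A minimal sketch (ours, not from the series):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero area if the boxes are disjoint).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, about 0.143
```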
Stanford CS224W: Machine Learning with Graphs
This course covers important research on the structure and analysis of such large social and information networks and on models and algorithms that abstract their basic properties. Students will explore how to practically analyze large-scale network data and how to reason about it through models for network structure and evolution…
MIT 6.S192: Deep Learning for Art, Aesthetics, and Creativity
Last Week's Newsletter's 3 Most Clicked Links
* Based on unique clicks.
** Find last week's issue #486 here.
Cutting Room Floor
nanoT5 (Encoder-Decoder / Pre-training + Fine-Tuning): Fast & Simple repository for pre-training and fine-tuning T5-style models
Everyone here seems focused on advanced modeling and CS skills. If you want a high paying job, IMO just focus on SQL and business metrics [Reddit Discussion]
OpenAI CEO, CTO on risks and how AI will reshape society [Video of ABC News interview]
App that uses ChatGPT for question-answering over all 365 episodes of the @lexfridman podcast
Thanks for joining us this week :)
All our best,
Hannah & Sebastian
If you enjoyed reading this,
please consider becoming a paid subscriber here:
Copyright © 2013-2023 DataScienceWeekly.org, All rights reserved.