Stanford PSYCH 290

Natural Language Processing & Text-Based Machine Learning in the Social Sciences

PSYCH 290 will not be offered in the 2023/2024 academic year, as the instructor is on sabbatical. We are planning to resume the course in Winter 2025.


The Winter 2023 course application is HERE.
For context, last year we had about 28 applicants and could admit roughly 12 students into the class.
Decisions will be made in early January 2023. Prioritization is generally by seniority.


Instructor: Johannes Eichstaedt (eich) (he/him)
Teaching Assistants: Maggie Harrington (mperry3) & Shashanka Subrahmanya (ssbrahma).
Class: Tue / Thu – 12:00pm to 1:30pm PT (90 mins) @ Building 460, Room 334
Office hours:
Johannes - TBD – Room 134 in Bldg. 420, or via Zoom (arrange by quick email).
Shashanka - TBD, in person or via Zoom (book via the Calendly link on Canvas, or via Slack or quick email).
Maggie - TBD, in person or via Zoom (book via Calendly, or via Slack/email).
Textbook: None. We will use papers/PDFs.
Prerequisites: Decent ability to code in R. Familiarity with multivariate regression and basic statistics of the social sciences. NOT required but helpful: Python, SSH, SQL (we will teach you what you need to know). Biggest requirement: knowing what science is, and wanting to learn.

NEW! All-In-One DLATK: Feature Extraction, Correlation, Topic Modeling

The 2023 syllabus is here.

If you want to prepare for the course, read Eichstaedt et al., 2020 from the readings folder.

Ethics Content for the course was previously contributed by Kathleen Creel

Course Road Map:

Road Map by Week:

Week 1 - Intro to the course & SQL (Block 1)

Tuesday, 1/10 - Lecture 1 - Intro to course, why DLATK, intro to computing infrastructure
Thursday, 1/12 - Lecture 2 - Getting started with SQL workshop – PLEASE BRING YOUR LAPTOP TO CLASS

This week there will be lots of little tutorials to get you oriented to the stack – there are video tutorials, an optional LinkedIn Learning tutorial, and Jupyter tutorials on the server starting in Lecture 2.
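If you want a zero-setup warm-up before the SQL workshop, Python's built-in sqlite3 module lets you practice the core SELECT/GROUP BY patterns locally. This is only a sketch – the class server's database setup differs, and the table and column names below are invented for illustration:

```python
import sqlite3

# In-memory toy database; the server's setup differs, but basic SQL syntax carries over.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE msgs (user_id INTEGER, message TEXT)")
conn.executemany(
    "INSERT INTO msgs VALUES (?, ?)",
    [(1, "happy day"), (1, "so tired"), (2, "great workshop")],
)

# Count messages per user -- the bread-and-butter GROUP BY pattern.
rows = conn.execute(
    "SELECT user_id, COUNT(*) AS n FROM msgs GROUP BY user_id ORDER BY user_id"
).fetchall()
print(rows)  # [(1, 2), (2, 1)]
```

The same query shape (group by a user or document ID, aggregate over messages) recurs constantly once we start extracting language features.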

Overview of Block 1 tutorials


VTutorials (video tutorials):

To get onto the server before lecture 2, please complete this quick start sequence:



Week 1, but less urgent:

Jupyter Tutorials:

These Jupyter notebook tutorials will be in your home folder on the server.



The readings folder is here.

Tutorials and homeworks **are always released on Thursday after class at the latest, and due the next Thursday before class.**

Week 2 - Intro to NLP (Block 1, 2)

Tuesday, 1/17 - Lecture 3 (W2.1) - The field of NLP, different kinds of language analyses
Thursday, 1/19 - Lecture 4 (W2.2) - meet DLATK & feature extraction (intro to new tutorials)



Readings: (“due” by W3.1)

Week 3 - Dictionaries: GI, DICTION, LIWC (Block 2)

Tuesday, 1/24 - Lecture 5 - Dictionary eval, and history
Thursday, 1/26 - Lecture 6 - DLATK lex extraction, GI, DICTION
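To make the dictionary-extraction idea concrete before the DLATK version: a dictionary analysis counts how often words from each category appear, relative to text length. The sketch below uses a tiny invented lexicon – real GI/LIWC dictionaries are far larger, and LIWC additionally uses wildcard stems like "happi*":

```python
from collections import Counter

# Toy category lexicon -- invented for illustration, not real GI/LIWC categories.
lexicon = {
    "posemo": {"happy", "great", "good"},
    "negemo": {"sad", "tired", "awful"},
}

def dict_scores(text: str) -> dict:
    """Relative frequency of each category's words in the text."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = len(tokens)
    return {
        cat: sum(counts[w] for w in words) / total
        for cat, words in lexicon.items()
    }

print(dict_scores("so happy and not tired today great"))
# posemo = 2/7, negemo = 1/7
```

Note that this naive version counts "not tired" as negative emotion – a core limitation of closed-vocabulary counting that the dictionary-evaluation lectures will pick apart.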



Readings: (“due” by Thursday, 1/26)

Week 4 - LIWC, annotation-based and sentiment Dictionaries (ANEW, LABMT, NRC) (Block 2)

Tuesday, 1/31 - Lecture 7 - LIWC, word-annotation based dictionaries ANEW, LabMT
Thursday, 2/2 - Lecture 8 - DLATK lexicon correlations, sentiment dicts NRC




Week 5 - Sentiment dictionaries and R (Block 2)

Tuesday, 2/7 - Lecture 9 - Types of Science with NLP, Intro to Open Vocab, power calculations
Thursday, 2/9 - Lecture 10 - data import, R and DLATK



Homework 5 files:
Messages CSV
Outcomes CSV
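Once you have messages and outcomes as separate CSVs, a typical first step (in R or Python) is joining them by user. A minimal pandas sketch – the column names `user_id`, `message`, and `swl` are hypothetical stand-ins, so check the actual homework files:

```python
from io import StringIO
import pandas as pd

# Stand-ins for the homework CSVs; the real column names may differ.
messages_csv = StringIO("user_id,message\n1,feeling great\n1,long day\n2,hello world\n")
outcomes_csv = StringIO("user_id,swl\n1,4.5\n2,3.0\n")

msgs = pd.read_csv(messages_csv)
outcomes = pd.read_csv(outcomes_csv)

# Each message joined to its user's outcome score via an inner join on user_id.
merged = msgs.merge(outcomes, on="user_id", how="inner")
print(merged.shape)  # (3, 3)
```

The same join happens implicitly inside DLATK whenever a message table is correlated with an outcome table keyed on the same group ID.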


Week 6 - Introduction to Open Vocab (Block 3)

Tuesday, 2/14 - Lecture 11 - Embeddings and Topics
Thursday, 2/16 - Lecture 12 - DLATK: 1to3gram feature extraction with occurrence filtering and PMI
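As a conceptual preview of Lecture 12 (not DLATK's actual implementation): n-gram extraction counts word sequences, an occurrence filter drops features too rare to be informative, and pointwise mutual information (PMI) flags multiword collocations whose parts co-occur more often than chance. A toy sketch with invented thresholds:

```python
from collections import Counter
from math import log2

docs = [
    "new york is big",
    "new york never sleeps",
    "big city life",
]
tokenized = [d.split() for d in docs]

# 1- and 2-gram counts (DLATK goes up to 3-grams).
unigrams = Counter(w for doc in tokenized for w in doc)
bigrams = Counter(
    (doc[i], doc[i + 1]) for doc in tokenized for i in range(len(doc) - 1)
)

# Occurrence filter: keep words used in at least min_docs documents.
min_docs = 2  # illustrative threshold
kept = {w for w in unigrams if sum(w in doc for doc in tokenized) >= min_docs}

# PMI: log ratio of observed joint probability to what independence predicts.
n = sum(unigrams.values())
def pmi(w1: str, w2: str) -> float:
    p_joint = bigrams[(w1, w2)] / n
    return log2(p_joint / ((unigrams[w1] / n) * (unigrams[w2] / n)))

print(kept)                       # {'new', 'york', 'big'}
print(round(pmi("new", "york"), 2))  # 2.46 -- high PMI flags "new york" as a collocation
```

High-PMI bigrams like "new york" are worth keeping as single features; low-PMI bigrams are usually just noise from frequent words landing next to each other.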



Download the word cloud PowerPoint template


Week 7 - Embeddings and Topic Modeling (Block 4)

Tuesday, 2/21 - Lecture 13 – Mystery lecture
Thursday, 2/23 - Lecture 14 – DLATK for topics, and topic conceptual review



Readings: (“due” by Thursday, 2/23)

Week 8 - Intro to ML (Block 5)

Tuesday, 2/28 - Lecture 15 - Intro to Machine learning
Thursday, 3/2 - Lecture 16 - Final Projects Intro, Reddit Scraping, More Machine learning
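The core machine-learning setup in this block is: text features in, survey score out, evaluated out of sample. A minimal scikit-learn sketch – the texts and scores below are invented, and real analyses use far more data and DLATK-extracted features rather than raw tf-idf:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy data: messages paired with a survey score (values invented for illustration).
texts = [
    "feeling wonderful and grateful today",
    "great day with friends",
    "so tired and stressed out",
    "awful week everything went wrong",
    "pretty good overall happy",
    "exhausted sad and worn down",
]
scores = np.array([4.8, 4.5, 2.0, 1.5, 4.2, 1.8])

# Regularized regression from text features, scored out-of-sample via cross-validation.
model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
r2 = cross_val_score(model, texts, scores, cv=3, scoring="r2")
print(r2)  # one out-of-sample R^2 per fold
```

Cross-validated accuracy, not in-sample fit, is the number that matters when reporting how well language predicts a psychological outcome.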




Week 9 - ML: deep learning & pre-presentations (Block 5)

Tuesday, 3/7 - Lecture 17 - Deep Learning – guest lecture by Andy Schwartz
Thursday, 3/9 - Lecture 18 - Final Project Pre-Presentations (please add to shared slide deck).



Readings: Read what’s relevant for your final projects!

Week 10 - Guest Lecture & LLMs

Tuesday, 3/14 - Lecture 19: Guest lecture by Ashwini
Thursday, 3/16 - Lecture 20: Overview of Generative Large Language Models and their Applications




Week 11 (Finals week) - Course summary and project presentations

Tuesday, 3/21 - Lecture 21 - Project presentations
Thursday, 3/23 - Lecture 22 - Project presentations



Readings: (read what’s relevant for your final projects)

Command logs for VTutorials

Basic logistics

This site will be kept up-to-date.

Readings are prioritized (A > B > C) and are in our readings Google Drive folder.
VTutorials: Video tutorials are on the unlisted class YouTube channel. Jupyter worksheets will be in your home folder. Homeworks will be there or posted here.
Lecture slides are linked from Canvas.
Communication will happen via our Slack channel – please access it via Canvas to set it up for the first time.

This is a crazy time. We will be maximally accommodating, supportive, and understanding, and will do everything we can to support you. We kept the class small for that reason. There is no final exam, the homeworks are small and weekly-ish, and the final two weeks are a team assignment. We’ll do our best to make this a fun, personal experience. We are glad you are here!!

FYI: Scope and Scheduled content

The sheet below summarizes the planned content.

Course background

What is this course?

This is an applied course with emphasis on the practical ability to deploy computational text analysis over data sets ranging from hundreds to millions of text samples – and mine them for patterns and psychological insight. These text samples can be social media posts, essays, or any other form of autobiographical writing. The goal is to practice these methods in guided tutorials and project-based work so that students can apply them to their own research contexts. The course will provide best practices, as well as access to and familiarity with a Linux-based server environment for processing text, including the extraction of words and phrases, topics, and psychological dictionaries. It will also cover basic machine learning that uses these text features to estimate survey scores associated with the text samples. In addition, we will practice how to further process and visualize the frequency of language variables in R for secondary analyses, with training on how to pull these variables directly into R from the database and server environment. In its entirety, the course aims to provide training in an entire state-of-the-art pipeline of computational text analysis, from text as input to final data visualization and secondary analysis in R. It will not focus on the mathematical theory behind these analyses or expect students to code their own implementations of text analyses. Familiarity with Python is helpful but not required. Basic familiarity with R is expected.

What, concretely, will we do in this course?

The course will rely heavily on a Python codebase (DLATK) that serves as a (fairly) user-friendly, Linux-based front end to a large variety of Python-based NLP and machine learning libraries (including NLTK and scikit-learn). The course will cover:

Who is this for?

What would we like you to learn?

The goal of the course is to empower students to carry out a variety of text analysis methods independently, and to write them up for peer review. At the end of the course, the student should:

In weeks 1-8:

I will give synchronous lectures on Zoom on Tuesdays and Thursdays (12:00-1:30pm). If we make good progress, I may reduce this to only a Tuesday lecture in some weeks to give you more time for the hands-on tutorials and the homework sets.

In addition, on your own time, every week, we have pre-recorded video tutorials for you. In these we ask you to follow along as Shrinidhi (and/or I) walk you through running analyses, etc.

More or less every week there will be a homework set based on what’s shown in the tutorials. Please submit these homeworks; we will grade them. They will not be particularly hard.

Classes are held over Zoom, and every lecture will be recorded so you can refer back to them later.

In weeks 9 & 10:

There will still be lectures, and maybe tutorials.

We will split into teams of 3-4 students. We will either give you a data set, or work with you to get one for you around a particular interest or research question (TBD, this can be quite tricky on short notice). You will go through the pipeline of methods you practiced in the course, and work together to write a final report that is a mock research paper: minimal introduction + methods + results with figures + discussion + supplement.

Homeworks: Assignments are given out on Thursday and are due the following Thursday before class.

Reading Types:

It’s a crazy time and I know sometimes there are hard trade-offs you have to make with your time between courses and life. I’ve kept the reading list as short as I can for that reason.

So I’m using the following system to demarcate how critical a reading is:

A – this is essential reading, giving you the intellectual scaffold to understand the main points of the course. Without reading these, you may miss entire sets of concepts and insights.
B – this is very helpful reading; if you miss this, you may miss single concepts or insights.
C – these readings build out your understanding. If you skip these, you may miss details.