In the know
For the past three and a half years, I’ve co-authored a weekly newsletter for my team in my role as a technical advisor at Amazon. Part of the newsletter is a section called “In the know”, which gives updates on current events in and around technology and science: progress on large language models (LLMs), examples of tech being used for good, interesting (and readable) papers on machine learning, progress in areas like renewable energy or medical discoveries, privacy and open-source intelligence (OSINT) tools, political news on tech regulation and policy, nostalgic notes on internet lore, interesting open source projects, things happening inside the company, or other headlines that in some way intersect with the nebulous cloud formed by our team’s collective interests. The newsletter also includes an “Unsolicited Reading Recommendations” section, which contains books the team is reading that others might find interesting. The goal of the newsletter is to help our team stay on top of stories that might be relevant to them: to let us discuss those things at a high level, or read more deeply if we’d like.
Generally, the updates take the shape of a headline, followed by a short two-to-five-sentence summary and a link to the source. Sometimes an update invites a longer commentary: a “why this matters” blurb. I find my updates from a variety of sources, like Hacker News, Techmeme, Ars Technica, TechCrunch, Wired, LinkedIn, X/Twitter, GitHub, and podcasts. I usually come across these while reading articles during my morning coffee, or I might hear an interesting thought while listening to a podcast on a walk. The hardest part has been knowing which articles will be most interesting to the group. They have to fit into the center of the team’s Venn diagram of interests before they’re added to the digest. Over time, it’s become second nature to know what to include and what to discard.
Last week, I was at the UN AI for Good conference in Geneva, and got to see Nick Thompson (CEO of The Atlantic) interview Geoffrey Hinton (Nobel laureate and one of the “godfathers of AI”). I’ve found that at tech conferences, interviews between a host and a tech expert are generally one-sided. The interviewer asks a question, the interviewee responds. The interviewer may have a follow-up, but it’s likely surface level, as they’re not quite at the same depth as the interviewee. The conversation is somewhat informative, but you’re constantly wishing it were deeper. This was different: Thompson hadn’t just done his homework, he showed a deep and nuanced understanding of AI and was well read on the latest news and advancements. Thompson was interviewing a literal founder of modern AI, and doing so with the knowledge of a grad student under Hinton’s mentorship.
I took an interest in Nick Thompson after this interview. On LinkedIn, he publishes a regular video series on “the most interesting thing in tech.” He follows a pattern set by other respected publishers I’ve been following, like Simon Willison and Ethan Mollick, who do a wonderful job of sharing stories and breaking down complex topics so that the reader (or viewer) can walk away with a practical understanding. This blog will attempt to do the same. I’ll be sharing some of the stories, thoughts, and books I put into our weekly newsletter, and why they matter, in the hopes that it gives readers a better understanding of the things happening around them.
While on the topic of AI …
I’ll admit LLMs are a great tool for helping with writing. Feed them enough examples of your writing, tell them what you want to talk about, and they’ll do a reasonable job of doing the work for you in a style that can be nearly indistinguishable from your own. But they’re also thinking for you, and that’s dangerous (and a bit sad). In my writing, I’ll be transparent when I use an LLM to help me. No AI was used in this post1, although it was very tempting to have it help me write a good closing statement for the paragraph above. I’ll be working on a tool that automatically calculates the percentage of each post that was created or modified by AI, and I look forward to open-sourcing it for others.
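One way such a calculation might work (this is just a sketch of the idea, not the tool itself; the function name and approach are my illustration): compare the human-written draft against the final published text and measure how much of the final text can’t be traced back to the draft. Python’s standard-library difflib makes a minimal version of this a few lines long.

```python
import difflib

def ai_contribution_ratio(human_draft: str, final_post: str) -> float:
    """Rough estimate of the fraction of the final post's characters
    that did not come from the human draft (a proxy for AI contribution)."""
    if not final_post:
        return 0.0
    matcher = difflib.SequenceMatcher(None, human_draft, final_post)
    # Count the characters of the final post that match the human draft.
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return 1.0 - matched / len(final_post)

# Hypothetical example: the second sentence was added by an AI assistant.
draft = "LLMs are useful writing tools."
final = "LLMs are useful writing tools. But they can also think for you."
print(f"{ai_contribution_ratio(draft, final):.0%}")
```

A real tool would need to be smarter than raw character diffs — paraphrased sentences and reordered paragraphs would fool this — but the basic shape is a diff between what the human wrote and what was published.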
I pasted this entire post into Google Docs to do a grammar check. It found a few errors that I corrected. Technically, grammar checks are done with natural language processing (NLP), an early subset of AI that dates back to the 1950s and 1960s. But to borrow John McCarthy’s famous observation, “as soon as it works, no one calls it AI anymore.”