weekend ai reads for 2023-04-14

📰 ABOVE THE FOLD: AUTOGPT

  • A thread on AutoGPT (Thread Reader version, no embedded images) (Twitter thread, sorry)

  • AgentGPT, a demo that shows roughly how AutoGPT would break a broad task into steps and work through them (requires an OpenAI API key)
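
    The AutoGPT/AgentGPT pattern boils down to: decompose a broad goal into subtasks, then work through a task queue. A toy sketch of that loop, with hypothetical `plan` and `execute` functions standing in for the LLM calls the real tools make:

    ```python
    from collections import deque

    def plan(goal):
        # Hypothetical stand-in for an LLM call that decomposes a goal.
        return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

    def execute(task):
        # Hypothetical stand-in for an LLM call that performs one subtask.
        return f"result of {task}"

    def run_agent(goal):
        """Minimal AutoGPT-style loop: plan, then drain the task queue."""
        tasks = deque(plan(goal))
        results = []
        while tasks:
            results.append(execute(tasks.popleft()))
        return results
    ```

    The real systems add a step where results feed back into planning (new subtasks get appended to the queue), which is what makes them feel autonomous.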

  • via Patrick: to address early limitations, the key will be datasets that help “fine-tune” LLMs like ChatGPT. Domain-specific training data with good labels will improve these models’ performance in an educational context. This article gives a nice summary of fine-tuning; written pre-GPT, this ImageNet for X memo offers some additional insight. Towards Data Science, Google Drive
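
    For a concrete sense of what “domain-specific training data with good labels” looks like, here is a sketch of assembling prompt/completion pairs as JSONL, the file format OpenAI’s fine-tuning endpoint accepts (the example records are made up):

    ```python
    import json

    # Hypothetical labeled examples from an educational domain.
    examples = [
        {"prompt": "Define photosynthesis in one sentence.",
         "completion": " Photosynthesis is how plants convert light into chemical energy."},
        {"prompt": "What is 7 x 8?",
         "completion": " 56"},
    ]

    def to_jsonl(records):
        """Serialize prompt/completion pairs as JSONL, one example per line."""
        return "\n".join(json.dumps(r) for r in records)

    jsonl = to_jsonl(examples)
    # Each line is a self-contained JSON object ready for a fine-tuning upload.
    ```

    The labels matter more than the volume: a few thousand clean, consistent pairs typically beat a much larger noisy set.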

  • Generative Agents: Interactive Simulacra of Human Behavior Next Big Future

🏗️ FOUNDATIONS

🎓 EDUCATION and AI

📊 DATA & TECHNOLOGY

  • The mounting human and environmental costs of generative AI Ars Technica

  • via Josh, a blog post (and a more interesting embedded video) on context injection, a way to personalize GPT’s responses without retraining the model Open Content
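
    The core idea of context injection is simple enough to sketch: retrieve the relevant personal or domain text and prepend it to the prompt, so the unmodified base model answers with that context. A minimal sketch, where the retrieval is naive keyword overlap rather than a real embedding search:

    ```python
    def retrieve(query, documents, k=1):
        """Naive retrieval: rank documents by keyword overlap with the query."""
        q_words = set(query.lower().split())
        ranked = sorted(documents,
                        key=lambda d: len(q_words & set(d.lower().split())),
                        reverse=True)
        return ranked[:k]

    def build_prompt(query, documents):
        """Inject retrieved context into the prompt instead of retraining."""
        context = "\n".join(retrieve(query, documents))
        return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

    docs = [
        "Our course meets Tuesdays at 3pm in Room 204.",
        "Grading is 60% projects and 40% exams.",
    ]
    prompt = build_prompt("When does the course meet?", docs)
    # The prompt now carries the relevant fact; the LLM call itself is unchanged.
    ```

    Production versions swap the keyword overlap for embedding similarity, but the prompt-assembly step is the same.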

  • possibly related, via Dale, “a technical look at whether or not we can get large language AI to be value-specific” (PDF) OpenAI

  • via Patrick, “large computational models have specific and interesting architectures that are important to understand”

    • Intro videos to deep neural networks from a leader in the field (YouTube channel)

    • Transformers (paper) – a key algorithmic and architectural choice inside many LLMs – work remarkably well on sequences, a data type that happens to describe many human-created real-world things (text, code, music) arXiv
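
    The heart of that paper is scaled dot-product attention over sequence elements: each position attends to every other, weighted by query–key similarity. A from-scratch sketch in plain Python with toy dimensions and no learned weights:

    ```python
    import math

    def softmax(xs):
        """Numerically stable softmax over a list of scores."""
        exps = [math.exp(x - max(xs)) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def attention(queries, keys, values):
        """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
        d = len(keys[0])
        out = []
        for q in queries:
            weights = softmax([dot(q, k) / math.sqrt(d) for k in keys])
            out.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
        return out
    ```

    Each output position is a weighted mix of all value vectors, which is why attention handles long-range structure in text so well compared to purely local architectures.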

🎉 FUN and/or PRACTICAL THINGS