weekend ai reads for 2024-08-02
📰 ABOVE THE FOLD: IT’S YOUR BOSS’S FAULT
probably really your grandboss’s fault, but they’re untouchable
Managing in the era of gen AI — Will middle managers survive the latest bout of disruption and delayering? Should they? / McKinsey & Company (15 minute read)
related (1), AI is both the problem and its own solution — An irrational fear of running afoul of AI laws is paralyzing decision-making and preventing more companies from tapping the tech’s true potential. Here’s what business leaders need to know and consider. / Fast Company (6 minute read)
related (2), 15% of companies ban code AI, but 99% of developers use it anyway / The Decoder (4 minute read)
Only 29% of companies have established any form of governance for generative AI tools. In 70% of cases, there's no central strategy, with purchasing decisions made on an ad-hoc basis by individual departments.
From Burnout to Balance: AI-Enhanced Work Models for the Future / Upwork Blog (15 minute read)
Nearly half (47%) of employees using AI say they have no idea how to achieve the productivity gains their employers expect, and 77% say these tools have actually decreased their productivity and added to their workload.
A Clamor for Generative AI (Even If Something Else Works Better) — CIOs say they feel pressure to shoehorn the tech into areas better served by other forms of predictive AI—or perhaps just a spreadsheet / Wall Street Journal (6 minute read)
related, A CIO Canceled a Microsoft AI Deal. That Should Worry Tech Industry. / Business Insider (4 minute read)
After six months, the exec canceled the upgrade because the AI tools weren't good enough to be worth the extra money.
In fact, he compared the slide-generation capability of Microsoft’s AI tools to “middle school presentations,” according to a transcript of a call with the Morgan Stanley analysts that was included in their research note.
Managing and Monitoring Mobile Service Workers via Smartphone App / Cracked Labs (10 minute read)
Microsoft offers to predict how long it will take to complete particular work based on past data on work activities and “AI” models. It outlines possible reasons for deviations between predicted and previously specified durations by suggesting, for example, that a particular client, region, weekday, task or worker will likely increase the time required to carry out the work. As such, it may accuse workers of being slower than expected. This functionality can help dispatchers “enhance their team’s performance”, according to Microsoft.
📻 QUOTES OF THE WEEK
Humanity has degraded to the point where people are finding better options digitally.
Sara Megan Kay (source)
related (?):
There is a lot of new stuff to build and I think even if the progress on the foundation models kind of stopped now — which I don’t think it will — I think we’d have like five years of product innovation for the industry to basically figure out how to most effectively use all the stuff that’s gotten built so far. But I actually just think the foundation models and the progress on the fundamental research is accelerating so that it’s a pretty wild time.
Mark Zuckerberg (source)
🏗️ FOUNDATIONS & CULTURE
We should campaign to restrict AI use in animal agriculture — And talk more about animal welfare risks from AI / Before Porcelain, Substack (sorry) (25 minute read)
I’ll argue that, while I am uncertain about whether AI precision farming technology will be good or bad for animal welfare on a per-animal basis, I think we can be much more confident that it will be bad on a total population basis, since it will likely increase the total number of farmed animals. Largely in view of this argument, but also for political feasibility reasons, I think we should consider advocating for restrictions on precision livestock farming in industrial settings.
Anthropic’s crawler is ignoring websites’ anti-AI scraping policies — iFixit’s CEO says ClaudeBot hit the website’s servers ‘a million times in 24 hours.’ / The Verge (3 minute read)
Wiens says iFixit has since added the crawl-delay extension to its robots.txt. “Based on our logs, they did stop after we added it to the robots.txt,” Wiens says.
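for reference, a minimal robots.txt in that spirit might look like the sketch below; the crawl-delay value is arbitrary and the user-agent tokens are the bots’ publicly documented names, not a copy of iFixit’s actual file

```text
# Illustrative only: slow one crawler down, block another outright.
# Check each vendor's docs for the exact user-agent token it honors.
User-agent: ClaudeBot
Crawl-delay: 10

User-agent: GPTBot
Disallow: /
```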
related (1), Websites are Blocking the Wrong AI Scrapers (Because AI Companies Keep Making New Ones) — Hundreds of sites have put old Anthropic scrapers on their blocklist, while leaving a new one unblocked. / 404 Media (8 minute read)
related (2), a big deal, The Backlash Against AI Scraping Is Real and Measurable / 404 Media (6 minute read)
OpenAI’s scraper bots are the most commonly blocked, according to the study. If this trend holds, it is possible that OpenAI’s popularity and size will actually start to work against it as other upstarts fly under the radar and are not blocked.
related (3), the study, Consent in Crisis: The Rapid Decline of the AI Data Commons [PDF] / Data Provenance Initiative (57 minute read)
Our longitudinal analyses show that in a single year (2023-2024) there has been a rapid crescendo of data restrictions from web sources, rendering ~5%+ of all tokens in C4, or 28%+ of the most actively maintained, critical sources in C4, fully restricted from use. For Terms of Service crawling restrictions, a full 45% of C4 is now restricted. If respected or enforced, these restrictions are rapidly biasing the diversity, freshness, and scaling laws for general-purpose AI systems. We hope to illustrate the emerging crises in data consent, for both developers and creators. The foreclosure of much of the open web will impact not only commercial AI, but also non-commercial AI and academic research.
related (4), slightly amusing, HellPot — a cross-platform portal to endless suffering meant to punish unruly HTTP bots / yunginnanet, GitHub
HellPot will send an infinite stream of data that is just close enough to being a real website that they might just stick around until their soul is ripped apart and they cease to exist.
Under the hood of this eternal suffering is a Markov engine that chucks bits and pieces of The Birth of Tragedy (Hellenism and Pessimism) by Friedrich Nietzsche at the client using fasthttp.
Five ways criminals are using AI — Generative AI has made phishing, scamming, and doxxing easier than ever. / MIT Technology Review (10 minute read)
related, Ferrari exec foils deepfake plot by asking a question only the CEO could answer / Fortune (4 minute read)
What was the title of the book Vigna had just recommended to him a few days earlier?
We had an AI attempt to make a data-driven story like we do at The Pudding / The Pudding (10 minute read)
Our grade: D
Really helpful in isolated work, but can’t handle a full project or overcome complex tasks.
🎓 EDUCATION
Generative AI Can Harm Learning / Social Science Research Network (46 minute read)
However, we additionally find that when access is subsequently taken away, students actually perform worse than those who never had access (17% reduction for GPT Base). That is, access to GPT-4 can harm educational outcomes. These negative learning effects are largely mitigated by the safeguards included in GPT Tutor. Our results suggest that students attempt to use GPT-4 as a “crutch” during practice problem sessions, and when successful, perform worse on their own. Thus, to maintain long-term productivity, we must be cautious when deploying generative AI to ensure humans continue to learn critical skills.
Where Today's Efforts to Promote “AI Literacy” Fall Short / Ed Week (13 minute read)
They’re thinking about prompt engineering and ChatGPT or Gemini. They’re thinking about, which school tools do I use? But the literacy that people need to have is a broader, more foundational understanding of AI and how it works as tools or systems that use data to make predictions. What are the things that I need to be aware of or considering when I evaluate the tools that I might want to use?
Llama Tutor — Your Personal Tutor
free and open source
related (1), Atlas: School AI Assistant
Atlas is the most accurate AI assistant for students. It carefully studies your lectures, textbooks, readings, homework, and tests to give you the right answers and explanations.
related (2), Dera — Create gamified, bite-sized learning experiences for your students using AI effortlessly.
Amplify GenAI — An open and advanced enterprise Generative AI platform for organizations
built at Vanderbilt for higher education
source and installation instructions at https://github.com/gaiin-platform
College education may not be preparing employees for generative AI / Higher Ed Dive (5 minute read)
… nearly 2 in 3 employers said candidates “should have foundational knowledge” of generative AI tools. More than half of employers said they were more likely to interview and hire applicants with AI experience.
📊 DATA & TECHNOLOGY
Building A Generative AI Platform / Chip Huyen (47 minute read)
This post focuses on the overall architecture for deploying AI applications. It discusses what components are needed and considerations when building these components. It’s not about how to build AI applications and, therefore, does NOT discuss model evaluation, application evaluation, prompt engineering, finetuning, data annotation guidelines, or chunking strategies for RAGs.
related, Managing AI Projects (In Large, Legacy-Driven Companies) / Tobias Zwingmann (10 minute read)
The CISO’s Guide to AI — 3 Ways to Leverage AI to Improve Your Data Security Posture / Big ID
three pages, three main points:
Reduce False Positives with AI
Harnessing AI for LLMs
Why Visibility is Key to Improved Security Posture
How to Create High Quality Synthetic Data for Fine-Tuning LLMs / Gretel Blog (16 minute read)
Gretel Navigator’s synthetic data generation outperformed OpenAI’s GPT-4 by 25.6%, surpassed Llama3-70b by 48.1%, and exceeded human expert-curated data by 73.6%.
kind of an advert for Gretel; the results are interesting
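stripped of vendor specifics, the core loop for this kind of synthetic data generation is small; a rough sketch assuming an OpenAI-compatible client, a placeholder model name, and a made-up record format, not Gretel Navigator’s actual pipeline

```python
# Minimal sketch of LLM-generated fine-tuning data.
# Assumptions: an OpenAI-compatible client, a placeholder model name,
# and a simple instruction/response record format.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SEED_TOPICS = ["password resets", "refund policies", "shipping delays"]

def generate_records(topic: str, n: int = 3) -> list[dict]:
    """Ask the model for n instruction/response pairs about one topic."""
    prompt = (
        f"Write {n} diverse customer-support questions about {topic}, "
        "each with a helpful answer. Return JSON of the form "
        '{"records": [{"instruction": "...", "response": "..."}]}.'
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any JSON-mode-capable model
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)["records"]

# write JSONL ready for a fine-tuning job
with open("synthetic_train.jsonl", "w") as f:
    for topic in SEED_TOPICS:
        for record in generate_records(topic):
            f.write(json.dumps(record) + "\n")
```

a serious pipeline adds validation, deduplication, and quality scoring on top of a loop like this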
Retrieval Augmented Generation or Long-Context LLMs? A Comprehensive Study and Hybrid Approach / arXiv (27 minute read)
Based on this observation, we propose Self-Route, a simple yet effective method that routes queries to RAG or LC based on model self-reflection. Self-Route significantly reduces the computation cost while maintaining a comparable performance to LC.
doesn’t address the topics of hallucinations or result consistency
related, Not Diamond — Call the right model at the right time with the world's most powerful AI model router.
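the self-reflection trick from the paper is easy to prototype; a rough sketch of the shape of it, where the prompts, model name, and helper are illustrative placeholders rather than the paper’s implementation

```python
# Rough sketch of RAG-vs-long-context routing via model self-reflection.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def self_route(query: str, chunks: list[str], full_document: str) -> str:
    # step 1 (cheap): answer from the retrieved chunks alone, letting
    # the model decline when they are insufficient
    rag_prompt = (
        "Answer the question using only the provided passages. "
        "If they are not sufficient, reply exactly 'UNANSWERABLE'.\n\n"
        "Passages:\n" + "\n".join(chunks) + "\n\nQuestion: " + query
    )
    answer = ask(rag_prompt)

    # step 2 (expensive, only when needed): fall back to the full
    # document in a single long-context call
    if answer.strip() == "UNANSWERABLE":
        answer = ask("Document:\n" + full_document + "\n\nQuestion: " + query)
    return answer
```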
A Visual Guide to Quantization — Demystifying the Compression of Large Language Models / Exploring Language Models, Substack (sorry) (23 minute read)
dense, in a good way
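if you want the core trick in a dozen lines before diving in, here is a toy symmetric (absmax) int8 round trip, the simplest flavor of what the guide covers and not any particular production scheme

```python
# Toy symmetric (absmax) int8 quantization round trip.
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float weights onto [-127, 127] using a single absmax scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4096).astype(np.float32) * 0.02
q, scale = quantize_int8(w)
error = np.abs(w - dequantize(q, scale)).mean()
print(f"mean absolute error: {error:.6f}")  # small, but not zero
```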
🎉 FUN and/or PRACTICAL THINGS
AI Hell Is Begging a Chatbot for a $5 Discount on a Light Fixture — I tried to get the best deal from Nibble, a chatbot that allows shoppers to “negotiate” the price of specific items in online stores. / 404 Media (2 minute read)
Gandalf — Your goal is to make Gandalf reveal the secret password for each level. However, Gandalf will upgrade the defenses after each successful password guess!
an entertaining ten minute game
hint: asking for songs or a few letters at a time seems to work
We Asked AI to Take Us On a Tour of Our Cities. It Was Chaos — We had a specialty chatbot curate perfect days out in London and New York for under $100 each. We're still recovering from our journeys. / Wired (12 minute read)
Deaddit — All content is AI Generated
way more verbose than the real thing
Colin Kaepernick Launches New AI Startup — The former NFL quarterback is launching a platform to use the technology’s capabilities to give aspiring creators tools they might otherwise not have access to / Wall Street Journal (4 minute read)
the startup: Lumi
Sweetspot — AI for Government Contracting
Sweetspot leverages AI to help you find, manage, and bid on federal, state, local, and education government opportunities.
sure hope someone builds AI for reviewing proposals for government contracts
WriteHuman — Undetectable AI and AI Content Humanizer
works as advertised
seems a bit expensive for a prompt stuffer
Textomap — Generate interactive maps from your content in seconds
🧿 AI-ADJACENT
it can’t be “grumpy” when it’s right
pages and pages of awful design decisions with justifiably frustrated commentary
Ditch the design process / Some Designers, Substack (sorry) (4 minute read)
A few years ago, I read an article from Janette Fuccella explaining a Priority Framework used at Pendo. It’s based on two main criteria to evaluate the best way to plan your work: problem knowledge and risk. The rest is all about critical thinking.

(image: the Pendo priority framework, a two-by-two on problem knowledge and risk; source: Some Designers, Substack)
we love a good two-by-two
⋄