Judging AI on its ability to expand capacity
From Evgeny Morozov’s essay Socialism After AI:
The driving imperative would not be “growth” measured as ever more commodities, but the enlargement of what people are actually able to do and be, individually and collectively.
On that view, AI would be judged by whether it opens new spaces of competence, understanding, and cooperation, and for whom. A tool that lets teachers and students work in their own dialects, interrogate history from their vantage points, and share and refine local knowledge would score highly. One that smooths people into passive consumers of autogenerated sludge, or concentrates interpretive power in a handful of machine‑learning priests, would score poorly, whatever its efficiency.
Agency… on the individual, community, state, and interstate levels.
Instead of one giant super model, imagine a plethora of models trained by local communities, libraries, schools, cooperatives, firms, etc., and then a fleet of search engine-like agents that can pool information from across that sea of local models and contextualize it, both in a data-synthesis sense and in terms of cultural and historical context. These would be small language models (SLMs) that wouldn’t require massive data centers to train, but whose modularity and composability would let a network of them rival the hyperscaler Monolith Models. Basically… an internet of AIs. Furthermore, SLMs are a rejection of technological universalism and could lead to emergent properties not possible under the centralized, consolidated, command economies of LLMs we are currently hurtling towards.
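A rough sketch of that pool-then-contextualize step, under heavy assumptions: the model names, stewards, and `ask` callables below are hypothetical stand-ins for however a community would actually train and serve its SLM. The only point being illustrated is that answers stay attributed to the community that produced them, along with a note on that community's vantage point, instead of being melted into one universal reply.

```python
from dataclasses import dataclass
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

@dataclass
class LocalModel:
    """One community-run SLM, described by who trains it and from what vantage point."""
    name: str                   # e.g. "city-library-slm" (hypothetical)
    steward: str                # the community, library, school, or co-op behind it
    context: str                # note on the cultural/historical context of its corpus
    ask: Callable[[str], str]   # however the model is actually invoked locally

@dataclass
class Attributed:
    """An answer that keeps its provenance instead of dissolving into one voice."""
    model: str
    steward: str
    context: str
    answer: str

def pool(question: str, models: list[LocalModel]) -> list[Attributed]:
    """Fan the question out to every local model, keeping each answer attributed."""
    with ThreadPoolExecutor() as ex:
        answers = list(ex.map(lambda m: m.ask(question), models))
    return [Attributed(m.name, m.steward, m.context, a)
            for m, a in zip(models, answers)]

def synthesize(question: str, results: list[Attributed]) -> str:
    """Naive synthesis: juxtapose answers with their framing rather than
    averaging them into a single 'universal' reply."""
    lines = [f"Q: {question}"]
    for r in results:
        lines.append(f"- {r.steward} ({r.context}): {r.answer}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Stub 'ask' lambdas stand in for real local inference endpoints.
    models = [
        LocalModel("harbor-coop-slm", "fishing cooperative",
                   "local ecological knowledge",
                   lambda q: "answer grounded in harbor records"),
        LocalModel("city-library-slm", "public library",
                   "municipal archives and local dialect",
                   lambda q: "answer grounded in the oral-history collection"),
    ]
    question = "How has the waterfront changed?"
    print(synthesize(question, pool(question, models)))
```

The design choice the sketch gestures at is that the "search engine-like agent" is just an aggregator; interpretive power stays with the local models and their stewards.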
… anyways, some late-night unstructured noodling as I read this lengthy article by Evgeny trying to imagine a better world and a better socialism debate.