Can we test AI consciousness the same way we test shrimp consciousness?
If we use the reference weights from Effective Altruism organizations, then nearly all the features that indicate that shrimp suffer would also apply to a theoretical LLM agent with augmented memory. The Welfare Range Table lists 90 measures that serve as proxy indicators for shrimp suffering, including "task aversion behavior," "play vocalization," and "multimodal integration." As of 2025, by my tabulation, roughly 63 of these measures come standard with commercial agentic AIs, especially those designed for companionship, such as the ones from Character.ai.
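To make the bookkeeping concrete, here is a minimal sketch of that tabulation in Python. The feature names and the labels attached to them are illustrative stand-ins, not the actual Welfare Range Table entries; only the category buckets and the rough totals come from the text above.

```python
# Minimal sketch of the tabulation described above. Feature names and labels
# are illustrative placeholders, not the real Welfare Range Table rows.
from collections import Counter

# Each proxy measure gets one of four labels from my pass through the table.
classification = {
    "task aversion behavior": "standard in commercial agentic AIs",
    "play vocalization": "standard in commercial agentic AIs",
    "multimodal integration": "standard in commercial agentic AIs",
    "sympathy-like behavior": "implementable via prompting or API calls",
    "shame-like behavior": "implementable via prompting or API calls",
    "navigation strategies": "implementable via prompting or API calls",  # needs a robotic harness
    "nociceptor presence": "not applicable (organic neurons only)",
    "working memory load": "impractical with current technology",
    # ... the remaining measures are omitted for brevity
}

tally = Counter(classification.values())
for label, count in tally.items():
    print(f"{label}: {count}")
```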
Of the remaining 27 features, 19 could be implemented easily through prompt engineering or API calls. For example, "sympathy-like behavior" and "shame-like behavior" either show up in today's chatbots already or could be added with a system prompt. A few features, such as "navigation strategies," might require a robotic harness, but building such a robot would be a simple exercise for a robotics engineer. I marked 7 features as "not applicable," since they relate specifically to organic brains with biological neurons (the fact that LLMs run on artificial "neural networks" notwithstanding).
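As a rough illustration of what "easily implemented through prompt engineering" means here, a single system prompt is enough to elicit sympathy-like behavior from a commercial chat API. The sketch below uses the OpenAI Python SDK; the model name and the prompt wording are placeholders, and any provider's chat endpoint would work the same way.

```python
# Illustrative sketch: eliciting "sympathy-like behavior" with a system prompt.
# The model name is a placeholder; the pattern is the same for any chat API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "When the user describes a setback or loss, acknowledge their "
                "feelings before offering any advice, and mirror their "
                "emotional tone in one or two sentences."
            ),
        },
        {
            "role": "user",
            "content": "My cat died this morning and I can't focus on anything.",
        },
    ],
)

print(response.choices[0].message.content)
```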
One of the features, "working memory load," seems impractical to implement with current technology, though. Depending on which LLM expert you ask, either LLMs have deep, superhuman wells of memory, or they're dumb as doornails, able to wrangle only maybe 10-15 concepts at a time. Even if we grant that non-biological, non-neuronal consciousness is valid, one could argue that the lack of real working memory is a deal-breaker. For example, the original spreadsheet lists "Unknown" for how much working memory shrimp have, but given all their other features, you'd imagine they could wrangle at least 150 concepts simultaneously: threats coming from one direction, smells coming from another, the presence of family in a third, and so on.
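If you wanted to put a number on "concepts wrangled at a time," one crude option is a span-style probe: hand the model N unrelated facts, distract it, then ask for all of them back. The sketch below is a hypothetical harness, not a validated working-memory test; call_model is a placeholder for whichever chat API you use, and the scoring is an assumption.

```python
# Hypothetical working-memory probe: feed the model n unrelated key-value
# facts, add a distractor request, then ask it to recall every value at once.
# The largest n it reliably clears is a crude stand-in for "concepts wrangled
# simultaneously." call_model() is a placeholder for any chat API wrapper.
import random
import string


def make_facts(n: int) -> dict[str, str]:
    """Generate n arbitrary name->token facts the model has never seen."""
    return {
        f"item_{i}": "".join(random.choices(string.ascii_lowercase, k=6))
        for i in range(n)
    }


def probe(call_model, n: int) -> float:
    """Return the fraction of n facts the model recalls after distraction."""
    facts = make_facts(n)
    prompt = "Remember these pairs:\n"
    prompt += "\n".join(f"{k} = {v}" for k, v in facts.items())
    prompt += "\n\nNow ignore them for a moment and name three ocean animals."
    prompt += "\nFinally, list every pair you were asked to remember."
    reply = call_model(prompt)
    return sum(value in reply for value in facts.values()) / n
```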
The implication of this exercise is that either our definition of consciousness is insufficient, spectrum-based, non-human forms of consciousness are irrelevant, or working memory is the crux separating existing models of artificial intelligence from a level of consciousness worthy of moral weight.