Welcome to Insight Axis, where I make connections between practical philosophy, technology, books, science, and more. I’m Zan - follow me on Twitter (X), Threads and Substack.
I’m back from my summer break, and ready to resume the usual weekly schedule.
Silicon-based artificial general intelligence (AGI) should surpass human intelligence. Computers are already better than humans at tasks like complex calculation; adding general, human-like intelligence to this foundation means computers will eventually be more capable than human beings. To frame this, I propose three main levels of testing that a computer-based AGI must pass.
1. Calculation
Computers are incredible at processing data. Tell a computer what calculation to perform on a set of numbers, and it will make no mistakes as long as it's correctly programmed. Computers do this faster than humans, and without getting tired. So I believe calculation should be the bedrock of any further intelligence we program into computers; it’s a shame that GPT-4 can write Shakespearean sonnets but can’t do simple sums (the Wolfram plugin for GPT-4 is kinda cheating).
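As a trivial illustration (a minimal Python sketch, nothing GPT-specific), ordinary software gets exact answers to calculations that would exhaust a human:

```python
import math

# Exact, tireless arithmetic: Python integers have arbitrary precision,
# so 1000! is computed exactly, not approximately.
print(math.factorial(1000) % 10**10)   # last ten digits of 1000!
print(sum(range(1, 1_000_001)))        # 500000500000, instantly and exactly
```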
How can you test calculation? For simple calculations, we've been testing computers' capabilities for decades. But our tests for advanced AI (like deep learning) are also tests of calculation. Take a standard deep learning model as an example. These models have layers of nodes joined by weighted connections. Input data are multiplied by these weights as they pass through the model, and at the end of all that multiplying, the model classifies the input into a given category. Deep learning accuracy is thus a test of calculation: how well does the algorithm optimise its weights to give the right answer?
The above clip (source: 3Blue1Brown, “But what is a neural network?”) shows how a trained deep learning model takes an array of pixels (which humans immediately see as the number 7) and converts it into a vector of 784 input values. These values are fed through layers of weighted “nodes”, which multiply them by learned weights to eventually classify the pixels as “7”. The model does all this without any knowledge of what “7” truly means, and without explicit reasoning (e.g. “it could be a 1 but looks more like a 7”). And apart from GPT's lacklustre “sparks” of reasoning, we aren’t close to the next step of intelligence testing.
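To make the mechanics concrete, here is a minimal sketch of that kind of forward pass in NumPy. The layer sizes mirror the 3Blue1Brown example, but the weights here are random placeholders rather than trained values, so this demonstrates only the calculation, not a working digit classifier:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

# Layer sizes as in the 3Blue1Brown example: 784 input pixels,
# two hidden layers of 16 nodes, 10 output digits. A real model
# would learn these weights from data.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 784)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
W3, b3 = rng.normal(size=(10, 16)), np.zeros(10)

def forward(pixels):
    """Pure calculation: repeated multiply-and-add, no 'understanding'."""
    h1 = relu(W1 @ pixels + b1)
    h2 = relu(W2 @ h1 + b2)
    return softmax(W3 @ h2 + b3)

image = rng.random(784)      # stand-in for a flattened 28x28 digit image
probs = forward(image)
print(probs.argmax())        # the model's guess (meaningless until trained)
```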
2. Reasoning
Computers are good at logic, but that doesn't make them good at reasoning. Reasoning means evaluating information and using it, in an explainable way, to reach a value-driven outcome.
Once computers can do immense calculation, the next thing you'd want is for them to reason about their calculations: to logically understand and interpret the data they're given, in an explainable way. Currently, GPT passes narrow computational tests based on language. It can describe the rules of games, but it can't implement those rules or reason out the best move. Similarly, even though AlphaGo is great at playing Go, it isn't reasoning out moves; its performance rests on computational power, not reasoning capability.
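To pin down that distinction, here is a hypothetical sketch (tic-tac-toe, not a test GPT has actually been given) of what “implementing the rules” means, as opposed to merely describing them in prose:

```python
# "Describing" tic-tac-toe rules is prose; "implementing" them means
# code that actually enforces which moves are legal and who has won.
def legal_moves(board):
    """board: list of 9 cells, each 'X', 'O', or None."""
    return [i for i, cell in enumerate(board) if cell is None]

def winner(board):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),     # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),     # columns
             (0, 4, 8), (2, 4, 6)]                # diagonals
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

board = ['X', 'O', None, None, 'X', None, None, None, None]
print(legal_moves(board))   # [2, 3, 5, 6, 7, 8] -- moves the rules permit
print(winner(board))        # None -- nobody has three in a row yet
```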
3. Creativity
The final test of intelligence is creativity. I invite you to consider creativity in a different way when it comes to intelligence: think of it as the ability to acknowledge the rules, but to consciously and intuitively break them when appropriate (philosophical question: what tells you when it’s appropriate?). The essence of creativity is disobedience, to an extent. Creativity means breaking down what exists and reconstituting it into something new, without instructions, or even despite instructions to do the opposite.
Creativity is the final, scariest, and most exciting frontier for intelligent silicon-based computers. Testing creativity is hard, and I don’t think we should be doing it until we’ve got high-performing reasoning machines.
Recommended reading from Substack:
- Do you have chaos friends? One essay on this list explores these volatile beings.
- I love a good paradox, and another piece here writes about them.
- A call to action on understanding the implications of progressing AGI.
- For the basics of machine learning (which is what deep learning and LLMs are built on), do read the series on the foundations of ML.
- As a complementary essay to my thoughts in this piece, I highly recommend the collaboration on how far we really are from AGI.
- A piece entwining self-regulation, awareness, mindfulness and executive function.
- A piece that takes the 26-module English alphabet to compose a masterpiece on the power of modularity in facilitating composability.
If you enjoyed this essay, please like, subscribe, and share it with others.
I learn from my readers, so leave a comment with your thoughts.
It's kind of funny that computers can do calculations without issue, but AI like ChatGPT cannot; it's notoriously bad at them. I agree with the creative aspect mentioned here, and I explored that a little bit 6 months ago.
https://www.polymathicbeing.com/p/can-ai-be-creative
Lastly, I'd add one more layer: Learning. It has to be able to form, unform, and reform concepts in a way that fosters learning. I'll go one step further here; it needs to know how to learn from others as well. That's the superpower of humans. Individually we aren't that smart, but we learn socially through mimicry and culture.
Hey, thanks for the mention! I'm in good company -- there are some solid names on that list.