AGI Architecture
LLMs and cognitive architectures are discussed more and more as LLM systems get more powerful. My interest in LLMs is in cognitive architecture and fully automating a system. We see this in HuggingGPT and AutoGPT, where the LLM system holds an internal dialog with itself and can reflect and iterate. I think these systems will get quite strong. In addition, plugins will make the system more powerful by expanding its IO. The memory plugin specifically will make such systems Turing complete, as the external memory can act as the tape and the LLM as the state machine.
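To make that picture concrete, here's a minimal sketch of the loop I have in mind: the LLM plays the role of a transition function, and an external memory plays the role of the tape it reads from and writes to. Everything below is hypothetical illustration, not any particular framework's API; the `llm()` function is a stub standing in for a real model call.

```python
# Sketch of "LLM as state machine, external memory as tape".
# llm() is a placeholder for a real model call; the memory layout, the step
# loop, and the STOP convention are assumptions made for illustration only.

def llm(state: str, observation: str) -> tuple[str, str]:
    """Stub transition function: maps (state, current memory cell) to
    (next state, value to write back). A real agent would prompt a model here."""
    if state == "plan":
        return "act", f"task list derived from: {observation}"
    if state == "act":
        return "reflect", f"result of executing: {observation}"
    return "STOP", observation  # reflect -> stop once the result looks acceptable


def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    memory = [goal]   # external memory: the "tape" the model reads and writes
    state = "plan"    # the model's current role in its internal dialog
    head = 0          # which memory slot it is currently attending to

    for _ in range(max_steps):
        next_state, written = llm(state, memory[head])
        memory.append(written)   # write back to memory (extend the tape)
        head = len(memory) - 1   # move the head to the newest entry
        state = next_state
        if state == "STOP":
            break
    return memory


if __name__ == "__main__":
    for entry in run_agent("summarize the week's papers"):
        print(entry)
```

The point of the sketch is that nothing about the loop itself is intelligent; the capability comes from what the transition function can do and what the memory lets it accumulate.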
The danger of these runaway systems, driven by self-dialog and reinforcement, should scare everyone, because the model does not need to be smart to do significant damage. In fact, if it has an incomplete picture of the world, it might do more damage. Some people already want to see the world burn, as the creation of ChaosGPT shows. More regulation needs to come to this space before a K-T-level extinction event. While most think that's unlikely, the ChaosGPT video itself shows the reasoning that leads someone to build one. Again, the model doesn't even need to be an AGI, or even that strong, if it has access to the internet and can propagate. As the theory of computation shows, simple mechanisms can create complex results. Given how many interactions can occur and how many plugins can be created, it's not clear how anyone can predict the outcome, especially when the model is non-deterministic and we are just rolling dice each time. The only way to find out is via simulation, and that is not reassuring for humanity.
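The "simple mechanisms, complex results" point isn't hand-waving. Rule 110, a one-dimensional cellular automaton whose entire update rule is an eight-entry lookup table, is known to be Turing complete. The sketch below is just an illustration of that principle, nothing LLM-specific:

```python
# Rule 110: each cell's next value depends only on itself and its two neighbors,
# via an 8-entry lookup table, yet the system is Turing complete.
# Purely an illustration of "simple rules, complex behavior".

RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells: list[int]) -> list[int]:
    """Apply the rule to every cell, using its left and right neighbors (wrapping at the edges)."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])] for i in range(n)]

width = 64
row = [0] * width
row[-1] = 1  # a single live cell is enough to seed an intricate, growing pattern

for _ in range(30):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

If a lookup table with eight entries can compute anything computable, predicting what a non-deterministic model wired to plugins and the internet will do is a much harder problem than it looks.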
I do believe a pause in development is warranted, along with an assessment of the cognitive architecture of these models. In addition, the benefits of AI must be considered so that its contributions are felt equally.
For now, we'll just have to hold tight as AI takes over the world.
PS I updated the Antarctic Solutions webpage and it looks better than ever!