Artificial Intelligence Ideas
Internal Debate
Split computing power across multiple instances of differently seeded AIs that must argue over the correct answer to give, stimulating deeper discussion and also giving more insight into how the response is created.
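A minimal sketch of how the debate loop could look, assuming a placeholder `generate(prompt, seed)` call that stands in for whatever seeded model is actually used:

```python
# Minimal sketch of the internal-debate loop. `generate` is a placeholder for
# a seeded model call; everything here is illustrative, not a real API.

def generate(prompt: str, seed: int) -> str:
    """Placeholder for a seeded model call (e.g. an LLM API or local model)."""
    return f"[seed {seed}] answer to: {prompt}"

def internal_debate(question: str, seeds: list[int], rounds: int = 2) -> str:
    # Each seeded instance drafts an independent answer.
    answers = {s: generate(question, s) for s in seeds}

    for _ in range(rounds):
        # Every instance sees the others' answers and argues for or revises its own.
        for s in seeds:
            others = "\n".join(a for k, a in answers.items() if k != s)
            critique_prompt = (
                f"Question: {question}\n"
                f"Your answer: {answers[s]}\n"
                f"Other answers:\n{others}\n"
                "Defend or revise your answer."
            )
            answers[s] = generate(critique_prompt, s)

    # A final pass merges the surviving arguments into one response.
    merge_prompt = f"Question: {question}\nDebate transcript:\n" + "\n".join(answers.values())
    return generate(merge_prompt, seed=0)

if __name__ == "__main__":
    print(internal_debate("Why is the sky blue?", seeds=[1, 2, 3]))
```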
Inner Voice
Allow the AI to self-prompt or "think" with an inner voice, which yields deeper responses and also gives insight into how each response is created.
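A rough sketch of the inner-voice loop, again with `generate` as a stand-in for a real model call; the returned scratchpad is what exposes how the final response was formed:

```python
# Sketch of an inner-voice loop: the model "thinks" to a private scratchpad
# before answering. `generate` is a placeholder, not a real API.

def generate(prompt: str) -> str:
    """Placeholder for a model call."""
    return "...model output..."

def answer_with_inner_voice(question: str, think_steps: int = 3) -> tuple[str, list[str]]:
    scratchpad: list[str] = []
    for _ in range(think_steps):
        thought = generate(
            f"Question: {question}\nThoughts so far:\n" + "\n".join(scratchpad) +
            "\nContinue thinking (do not answer yet):"
        )
        scratchpad.append(thought)
    final = generate(
        f"Question: {question}\nThoughts:\n" + "\n".join(scratchpad) + "\nFinal answer:"
    )
    # Returning the scratchpad gives the insight into how the response was formed.
    return final, scratchpad
```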
Retokenization Iteration
The AI creates a new token basis and then trains a young AI on this new basis, which is in theory more efficient at producing logic, etc. The young AI then exceeds the previous AI and in turn generates an even better token basis. The process repeats indefinitely. The difference between this and just "adding another hidden layer" is that errors in the token basis can propagate through hidden layers, whereas a better or perfectly logical token basis will not carry such errors.
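A toy illustration of the loop, where "proposing a better token basis" is stood in for by a BPE-style pair merge and "better" just means the corpus compresses into fewer tokens; a real version would train and benchmark a fresh model on each new basis instead:

```python
# Toy sketch of the retokenization loop: each generation refines the token
# basis (here, by merging the most frequent adjacent pair, as in BPE).
from collections import Counter

def encode(text: str, merges: list[tuple[str, str]]) -> list[str]:
    tokens = list(text)
    for a, b in merges:                       # apply learned merges in order
        out, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
                out.append(a + b); i += 2
            else:
                out.append(tokens[i]); i += 1
        tokens = out
    return tokens

def propose_merge(tokens: list[str]) -> tuple[str, str]:
    """Stand-in for the current model proposing the next basis refinement."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0]

def retokenization_iteration(corpus: str, generations: int = 10) -> list[tuple[str, str]]:
    merges: list[tuple[str, str]] = []
    for _ in range(generations):
        tokens = encode(corpus, merges)
        if len(tokens) < 2:
            break
        merges.append(propose_merge(tokens))  # each generation builds on the last basis
    return merges

if __name__ == "__main__":
    print(retokenization_iteration("the theory of the thing " * 4))
```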
Optimal Shape for Producing Lift
Produce Initial Guess for Gauss-Seidel Matrix Solver
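For the second idea, a sketch of where a learned initial guess would plug in: Gauss-Seidel itself is standard, and the warm-start vector `x0` is the part a model would predict (the example values below are illustrative):

```python
# Gauss-Seidel with an optional warm start; a better x0 (e.g. predicted by a
# model) should cut the iteration count compared with the default zero start.
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=10_000):
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated components (j < i) and old ones (j > i).
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x, it + 1
    return x, max_iter

if __name__ == "__main__":
    A = np.array([[4.0, 1.0], [2.0, 3.0]])   # diagonally dominant, so it converges
    b = np.array([1.0, 2.0])                  # true solution is (0.1, 0.6)
    _, iters_cold = gauss_seidel(A, b)                       # default zero start
    _, iters_warm = gauss_seidel(A, b, x0=[0.09, 0.6])       # model-like warm start
    print(iters_cold, iters_warm)
```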
Creativity & Truth
A generative model "creates" / invents rulesets. A computational model rigorously tests whether each ruleset is valid and finds all statements that satisfy it. The generative model then iterates, extracting all "useful" statements and determining whether they match observation.
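A toy version of the generate-then-verify split, with hand-written rules standing in for the generative model's proposals and exhaustive checking over a small observation set standing in for the rigorous test:

```python
# Toy generate-then-verify loop: a proposer emits candidate rules and an
# exhaustive checker keeps only those that hold on every observation.
from typing import Callable, Iterable

Rule = Callable[[int], bool]

def verify(rule: Rule, observations: Iterable[int]) -> bool:
    """Rigorously test the rule against every available observation."""
    return all(rule(x) for x in observations)

def creativity_and_truth(proposals: dict[str, Rule], observations: list[int]) -> list[str]:
    # Keep only the "useful" statements that match observation.
    return [name for name, rule in proposals.items() if verify(rule, observations)]

if __name__ == "__main__":
    # In the real idea these would come from a generative model, not be hand-written.
    proposals = {
        "n*n >= n for naturals": lambda n: n * n >= n,
        "every n is even":       lambda n: n % 2 == 0,
    }
    print(creativity_and_truth(proposals, observations=list(range(10))))
```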
Adaptive Extension
To seamlessly give the AI control over new things, allow for "muscle memory". For example, for geochem, start by giving the AI control over keyboard and mouse position (which it already has as a general AI computer agent). Then also open output channels for creating a box, sphere, cylinder, etc., with dimension constraints. These channels will never fire at first, because the AI only knows how to move the mouse and hit keys. Then, when the AI performs one of these learned actions, like making a box, backpropagate through the box output node to "wire" the action so it triggers box creation more directly. This way, the AI can "nativize" or build muscle memory for anything.
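A sketch of the wiring step, assuming a PyTorch policy with a shared trunk, an existing low-level head (mouse/keys), and a new high-level head (box, sphere, ...); when a low-level trace is recognized as a box creation, that event becomes a supervised target for the new head only:

```python
# Sketch of "nativizing" a new action. Only the high-level head is trained, so
# existing mouse/keyboard behaviour is left untouched while the new output
# node gets wired to the recognized action.
import torch
import torch.nn as nn

class AgentPolicy(nn.Module):
    def __init__(self, obs_dim=64, low_level_actions=32, high_level_actions=4):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.low_level = nn.Linear(128, low_level_actions)    # mouse / keyboard
        self.high_level = nn.Linear(128, high_level_actions)  # box, sphere, cylinder, ...

    def forward(self, obs):
        h = self.trunk(obs)
        return self.low_level(h), self.high_level(h)

policy = AgentPolicy()
optimizer = torch.optim.Adam(policy.high_level.parameters(), lr=1e-3)

def nativize(obs: torch.Tensor, recognized_action: int):
    """Backpropagate a recognized low-level behaviour (e.g. "made a box")
    onto the corresponding high-level output node."""
    _, high_logits = policy(obs)
    loss = nn.functional.cross_entropy(high_logits, torch.tensor([recognized_action]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Usage: whenever the agent is observed creating a box via mouse/keys, call
# nativize(current_obs, recognized_action=0)   # e.g. index 0 = "create box"
```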
Collective Health Patterns
On the individual scale, an AI monitors foods consumed, sleep quality, emotions, etc., to detect patterns and suggest optimizations. On the global scale, AI monitors populations, their diseases, and how they spread.
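A toy example of the individual-scale pattern detection, with made-up daily logs; strong pairwise correlations are surfaced as candidate patterns, not causal claims:

```python
# Correlate daily logs (sleep, caffeine, mood) to flag candidate patterns.
# All column names and values here are invented for illustration.
import pandas as pd

log = pd.DataFrame({
    "sleep_hours": [7.5, 6.0, 8.0, 5.5, 7.0, 6.5, 8.5],
    "caffeine_mg": [90, 200, 50, 250, 120, 180, 40],
    "mood_score":  [7, 5, 8, 4, 6, 5, 9],   # self-reported, 1-10
})

# Strong off-diagonal correlations are the patterns worth surfacing to the user.
print(log.corr().round(2))
```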
Model Factory
The question is: what is the best token basis for an AI model? LLMs struggle because words are inherently flawed and subjective. Suppose we created a new language that was purely objective: code + math + science. A model built on it could be positioned to solve the universe much better; however, it would be impossible to create such a language by hand with enough data for everything to work. What if instead we used LLMs to create such a language and forced the LLMs to work out / synthesize data and examples in it? That output then becomes the seed for a new model, which uses the new language as its token basis and trains on the LLM-synthesized examples. The process repeats continuously, spawning newer and newer models. Other LLMs could be used to translate this language back into English!
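A schematic of the bootstrap loop; every function below is a placeholder standing in for a large synthesis or training job, and only the shape of the loop is the point:

```python
# Schematic of the model-factory bootstrap (placeholders throughout).

def synthesize_language_and_corpus(model):
    """Current model designs the objective language (code + math + science)
    and writes worked examples in it."""
    return {"spec": "...", "corpus": ["..."]}

def train_on_new_basis(corpus):
    """Train a fresh model whose token basis is the new language."""
    return {"trained_on": corpus}            # placeholder for a trained model

def evaluate(model) -> float:
    """Fixed benchmark shared across generations so progress is comparable."""
    return 0.0

def translate_to_english(translator_llm, statement: str) -> str:
    """A separate LLM renders outputs of the new language back into English."""
    return statement

def model_factory(seed_llm, generations: int = 3):
    model, best = seed_llm, evaluate(seed_llm)
    for _ in range(generations):
        lang = synthesize_language_and_corpus(model)
        candidate = train_on_new_basis(lang["corpus"])
        score = evaluate(candidate)
        if score <= best:                    # stop once a generation stops improving
            break
        model, best = candidate, score
    return model
```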