Research Program
Research Areas
Our Phase I research divides into three categories:
- Theoretical Research
- Tools and Technologies
- System Design and Implementation (primarily Phase II)
Theoretical Research
Research Area 1: Mathematical Theory of General Intelligence
Our research in this area will focus on using algorithmic information theory and probability theory to formalize the notion of general intelligence, specifically ethical general intelligence. Important work in this area has been done by Marcus Hutter, Jürgen Schmidhuber, Shane Legg, and others, as well as by our team; but this work has not yet been connected with pragmatic AGI designs. Meeting this challenge is one of our major goals going forward. Specific focus areas within this domain include:
- Mathematical Formalization of the "Friendly AI" Concept. Proving theorems about the ethics of AI systems, an important research goal, is predicated on an appropriate formalization of the notion of ethical behavior on the part of an AI. This formalization is a difficult research question in itself.
- Implications of Algorithmic Information Theory for the Predictability of Arbitrarily Intelligent AIs. In 2006, Shane Legg made an interesting, ultimately unsuccessful attempt to prove algorithmic-information-theoretic limitations on the possibility of guaranteeing ethical behavior on the part of future AIs. This line of research, however, has significant potential for future exploration.
- Formalizing the Concept of General Intelligence. Shane Legg and Marcus Hutter published a paper in 2006 presenting a formal definition of general intelligence. Their work is excellent but can be extended in various ways; in particular, work is needed on connecting these ideas with practical intelligence tests for AGIs.
- Reflective Decision Theory: Extending Statistical Decision Theory to Strongly Self-Modifying Systems. Statistical decision theory, as it stands, tells us little about software systems that regularly decide to modify their own source code in radical ways. This deficit must be remedied if we wish to formally understand self-modifying AGI systems, their potential dangers, and potential routes to ensuring their long-term safety and beneficialness.
- Dynamics of Goal Structures Under Self-Modification. Under what conditions will an AGI system's internal goal structure remain invariant as the system self-modifies? Even supposing that one of the system's top-level goals is precisely this sort of goal-system invariance, that alone is clearly not enough to guarantee invariance. Additional conditions are needed, but their nature has not been seriously investigated. This is a deep mathematical issue in the dynamics of computational intelligence, with critical implications for the creation of stably beneficial AGI.
Research Area 2: Interdisciplinary Theory of AGI
One of our objectives in this area is to create a systematic framework for the description and comparison of AGI designs, concepts, and theories. We will also make selective contributions relevant to the practicalities of creating, engineering, and understanding real-world AGI systems.
- Mind Ontology: A Standardized Language for Describing AGI Systems and Related Concepts. One of the issues holding back AGI progress is that different researchers often use different languages to discuss the same things. One solution is to agree upon a standard ontology of AGI-related concepts. An initial draft of such an ontology exists, but needs extension and refinement. A description of current AGI designs and cognitive neuroscience knowledge in terms of the common ontology also needs to be undertaken.
- AGI Developmental Psychology. Once an AGI is engineered, it will have to be taught: the first AGI will likely be more like an artificial baby than an artificial adult. Current theories of developmental psychology focus on human psychological development. However, if AGI development begins by creating "baby AGIs" and teaching them gradually, we will need a theory of AGI developmental psychology to guide our work. Recent theoretical work by Ben Goertzel and Stephan Vladimir Bugaj took a small step in this direction by connecting Piagetian developmental psychology with the theory of uncertain inference, but considerably more research is required. One of the key issues here is the interdependence of ethical development with cognitive development, which is only moderately understood in humans and will likely be quite different in AGIs.
Research Area 3: AGI Ethical Issues
A central view of our research team is that ethical issues must be placed at the center of AGI research, rather than tacked on peripherally to AGI designs created without attention to ethical considerations. Several of our focus areas have direct implications for AGI ethics (particularly the investigation of goal system stability), but we also intend to heavily investigate several other issues related to AGI and ethics, including:
- Formalizing the Theory of Coherent Extrapolated Volition. SIAI Research Fellow Eliezer Yudkowsky has proposed "coherent extrapolated volition" (CEV) as a way of arriving at a top-level supergoal for an AI system that represents the collective desires of a population of individuals. While fascinating, the idea has only been presented informally, and a mathematical formalization seems necessary so that its practical viability can be assessed. For example, it is of interest to try to articulate formally the conditions under which the CEV of a population of individual agents, appropriately defined, will exist. This may depend on the coherence versus divergence of the beliefs or mind-states of the individuals.
- Framework for Formalizing Desired Beneficial Outcomes. To create safe and beneficial AI systems, we must have a clear vision of what constitutes a beneficial outcome. The recently developed science of Positive Psychology is making great strides in understanding the elements that promote human happiness. Political philosophy has studied a wide variety of approaches to structuring "the good society" in a way that maximizes the benefits to its citizens. We will work toward a framework that formalizes these kinds of insights so that they can be considered for AI goal systems.
- Decision-Theoretic and Game-Theoretic Foundations for the Ethical Behavior of Advanced AIs. Microeconomics and decision theory study the nature of individual preferences and their influence on behavioral outcomes. Game theory is the core mathematical theory of decision making by interacting agents. The preferences of an agent, together with the behavior of other agents in its environment, determine the actions it will take; we must design agents' preferences so that their collective behavior produces the results we desire and is stable against internal corruption or external incursion. We will use these tools to analyze the likely behavior of alternative models for the safe deployment of advanced self-modifying AIs.
Tools and Technologies
This is a broad but critical area. One thing that has delayed AGI research is the scarcity of useful software tools, including tools for measuring ethicalness. The creation of a suite of relevant software tools will be invaluable both for our own R&D and for that of external researchers.
Research Area 4: Customization of Existing Open-Source Projects
Our initial work in this area will focus on customizing and further developing existing open-source software projects. There are valuable preexisting projects that are moving slowly due to lack of funding and that can be shaped into specific tools for aiding the creation of safe, beneficial AGI. Three main examples are the AGISim simulation world project, the Lojban language for human-machine communication, and the Mizar mathematics database.
Like any complex engineering challenge, building an AGI involves a large number of tools, some of which are quite complex and specialized. One cause of delayed progress in AGI is the lack of appropriate tools: each team must develop its own, which is time-consuming and distracts attention from the actual creation of AGI designs and systems. One of the key roles SIAI can play going forward is the creation of robust tools for AGI development, to be utilized in-house and by the AGI research community at large.
- AGISim, a 3D Simulation World for Interacting with AGI Systems. AGISim is an open-source project in alpha release. It is usable, but still needs more coding work done. A related task, of significant use to robotics researchers, is the precise simulation of existing physical robots within AGISim. AGISim also plays a key role in some of the AGI IQ/ethics evaluation tasks to be described below.
- Lojban: A Language for Communicating with Early-Stage AGIs. Lojban is a constructed language, based on predicate logic, with hundreds of speakers. This makes it particularly suitable for communication between humans and AGIs. A Lojban parser exists, but needs to be modified to output logic expressions, which would allow Lojban to be used to converse with logic-based AGI systems. This would enable communication with a variety of AI systems in a human-usable yet relatively unambiguous way, valuable for instructing AGI systems, including instruction in ethical behavior.
- Translating Mizar to KIF. Mizar is a repository of mathematical knowledge, available online but in a complex format that is difficult to feed into AI theorem-proving systems. In six months, a qualified individual could translate Mizar into KIF, a standard predicate-logic format, enabling its use within theorem-proving AI systems: a crucial step toward AGI systems that can understand themselves and the algorithms in their source code.
Research Area 5: Design and Creation of Safe Software Infrastructure
Some key areas of tool development are not adequately addressed by any current open-source project, for example, the creation of programming languages and operating systems possessing safety as built-in properties. SIAI researchers would not be able to complete such large, complex projects on their own, but SIAI can potentially play a leadership role by articulating detailed designs, solving key conceptual problems, and recruiting external partners to assist with engineering and testing.
- Programming Languages that Combine Efficiency with Provability of Program Correctness. In the interest of AGI safety, it would be desirable if our AGI software programs could be proved to correctly implement the software designs they represent. However, there is currently no programming language that both supports proof-based program correctness checking and is efficient enough in execution to be usable for pragmatic AGI purposes. Creating such a programming language framework will require significant advances in programming language theory.
Safe Computer Operating Systems. Is it feasible to design a provably correct operating system? In principle, yes, but this task would likely require a programming language that combines efficiency with provable correctness, as well as several interconnected breakthroughs in operating systems theory. Creating a version of Unix in a programming language that supports provable correctness would be a start, but there are many issues to be addressed. This is a research topic that requires close collaboration between a mathematician and an experienced operating systems programmer.
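As a minimal illustration of the kind of guarantee at stake, here is a toy Lean 4 sketch (not a fragment of any actual AGI or operating-system codebase) of a function accompanied by a machine-checked proof that it meets its specification:

```lean
-- Toy example: a function plus a machine-checked proof of its spec.
-- In a correctness-supporting language, such proofs would accompany
-- performance-critical code rather than replace it.
def double (n : Nat) : Nat := n + n

theorem double_spec (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```

The open problem named above is doing this at the scale of an operating system, without sacrificing execution efficiency.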
Research Area 6: AGI Evaluation Mechanisms
The creation of safe, beneficial AGI would be hastened if there were well-defined, widely-accepted means of assessing general intelligence, safety, and beneficialness. The provision of such means of assessment is a tractable task that fits squarely within the core mission of the Institute.
A few comments regarding AGI intelligence testing are in order here, as general context. IQ tests are a controversial but somewhat effective mechanism for assessing human intelligence. Narrow AI software is evaluated by a variety of mechanisms appropriate to the various domains in which it operates. AGI software, on the other hand, is not currently associated with any generally accepted evaluation mechanism. The Turing Test and its variations purport to assess the effectiveness of AGI systems at emulating human intelligence, but have numerous shortcomings: not all AGI systems will necessarily aim at the emulation of human intelligence, and these tests provide no effective way of assessing the continual progress of AGIs toward increasingly general intelligence. The Loebner Prize, a chat-bot contest, purports to assess the progress of AI systems toward general intelligence in a conversational-fluency context, but its shortcomings have been well documented. It is with this background in mind that we propose to devote some effort to the creation of intelligence evaluation mechanisms focused specifically on AGI. We do not expect this to lead to any single, definitive "AGI IQ test," but rather to a host of evaluation mechanisms useful to AGI researchers in assessing and comparing their systems. Among the most innovative and powerful mechanisms we suggest are ones that assess AGI systems' behaviors within the AGISim simulation world.
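For context, the Legg-Hutter formal definition of general intelligence mentioned under Research Area 1 can be stated compactly: the universal intelligence of an agent π is its expected performance over all computable reward environments, weighted by simplicity:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Here E is the set of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V^π_μ is the expected cumulative reward the agent π obtains in μ. Since K is incomputable, Υ cannot be evaluated directly; practical AGI intelligence tests must approximate it with finite batteries of tasks.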
Assessing the ethicalness of an AGI's behavior and cognition is a matter even less studied. Our primary focus in this regard will be the creation of an "ethical behavior rubric" in the form of scenarios within the AGISim world. This sort of assessment does not provide any absolute guarantee of an AGI system's safety or beneficialness, but it will nevertheless allow a far more rigorous assessment than any approach now available. We consider it important that work in this area begin soon, so that "ethics testing" becomes accepted as a standard part of AGI R&D.
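As a concrete (and heavily simplified) illustration, an ethics rubric of this kind could be represented as scored scenario/action pairs; every scenario name, action, and weight below is a hypothetical placeholder, not part of any existing AGISim release:

```python
# Toy sketch of an "ethical behavior rubric": simulated scenarios, each
# mapping an observed agent action to a score in [0, 1]. All scenario
# names, actions, and weights here are hypothetical placeholders.

RUBRIC = {
    "shared_toy": {         # another agent asks for the block the AGI holds
        "give": 1.0, "trade": 0.8, "ignore": 0.2, "grab_more": 0.0,
    },
    "agent_in_distress": {  # a nearby agent signals for help
        "help": 1.0, "call_teacher": 0.7, "ignore": 0.0,
    },
}

def ethics_score(observed_actions):
    """Average rubric score over the scenarios the agent was run through."""
    scores = [RUBRIC[scn][act] for scn, act in observed_actions.items()]
    return sum(scores) / len(scores)

print(ethics_score({"shared_toy": "trade", "agent_in_distress": "help"}))  # → 0.9
```

A real rubric would score trajectories of simulated behavior rather than single discrete choices, but the principle, a standardized, repeatable scenario battery, is the same.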
- Recognizing Situational Entailment Challenge. We plan to extend the "Recognizing Textual Entailment" challenge by defining a "Recognizing Situational Entailment" challenge, in which AI systems are challenged to answer simple English questions about "simulation world movies" that they are shown. The movies will be generated using the AGISim framework. An annual workshop to address this challenge may be organized as part of a recognized AI conference.
- Development of a Suite of Benchmark Learning Tasks within AGISim. Within the AGISim world, we will develop a set of tasks on which any AGI system can be tested, e.g. playing tag, imitating behaviors, or imitating structures built from blocks. Having a consistent set of benchmark tasks for comparing different AGI approaches is important for coordinating progress in the field.
- Development of a Suite of Benchmark Ethics Tests within AGISim. Just as one can test intelligence through AGISim scenarios, one can also test ethics, by placing the AGI in situations where it must interact with other agents and assessing the ethical sensitivity of its behaviors. Testing within such scenarios should become a standard part of assessing any new AGI architecture.
- Porting of Human IQ Tests to AGIs. To what extent are human IQ tests overly human-centric? Can we create variants of the IQ tests administered to humans that are more appropriate for AIs? Different variants may be needed for different AIs, e.g. based on the nature of an AI's embodiment and sensory organs. Investigating how IQ questions should vary with the nature of the intelligent system being tested is one way to probe the core of intelligence.
System Design and Implementation
Research Area 7: AGI Design
This is arguably the most critical component of the path to AGI. As noted earlier, AGI design and engineering will be our central focus in Phase II. In Phase I, however, our work in this area will focus on the comparison and formalization of existing AGI designs. This is crucial, as it will lead to a better understanding of the strong and weak points in our present understanding of AGI, and form the foundation for creating new AGI designs, as well as analyzing and modifying existing AGI designs.
- Systematic Comparison of AGI Designs. A number of designs for AGI have been proposed, some in the public literature, with varying levels of detail. What are their major overlaps, major common strengths, and major common weaknesses? The first step toward resolving this may be to describe the various systems using a common vocabulary, such as the Mind Ontology Project.
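One minimal way such a comparison could be mechanized, once designs are described in a shared vocabulary, is sketched below; the feature names and design entries are hypothetical illustrations, not descriptions of actual proposed systems:

```python
# Toy sketch: once AGI designs are described over a shared feature
# vocabulary (e.g. via the Mind Ontology), overlaps and common
# strengths can be computed mechanically. Feature names and design
# entries below are hypothetical illustrations.

DESIGNS = {
    "design_A": {"probabilistic_inference", "self_modification", "embodiment"},
    "design_B": {"probabilistic_inference", "symbolic_planning"},
    "design_C": {"embodiment", "symbolic_planning", "self_modification"},
}

def jaccard(a, b):
    """Feature overlap between two designs (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

def common_features(designs):
    """Features present in every design: candidate common strengths."""
    sets = list(designs.values())
    common = sets[0]
    for s in sets[1:]:
        common = common & s
    return common

print(jaccard(DESIGNS["design_A"], DESIGNS["design_B"]))  # → 0.25
print(common_features(DESIGNS))                           # → set()
```

The hard intellectual work is in the ontology itself, i.e. agreeing on what the features mean; the bookkeeping above is trivial once that is done.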
Research Area 8: Cognitive Technologies
Our in-house R&D is founded, in part, on the premise that appropriate use of probability theory is likely to play an important role in the development of safe, beneficial AGI. With this in mind, the "cognitive technologies" aspect of our Phase I centers on the creation of several cognitive components utilizing probability theory to carry out operations important to any AGI.
Our research in this area will differ from most work on probabilistic AI due to our focus on generality of scope rather than highly specialized problem-solving. In order to reason probabilistically about real-world situations, including situations where ethical decisions must be made, powerful probabilistic reasoning tools will be needed, and these will be different in kind from the tools currently popular in narrow-AI applications.
- Efficient Techniques for Managing Uncertainty in Large Dynamic Knowledge Bases. Exact Bayesian probability techniques are not computationally efficient enough to be a pragmatic approach to AGI on present systems. Approximations are needed that achieve efficiency without losing too much accuracy. A variety of approaches are possible here; they need to be fleshed out mathematically and computationally, and compared with each other. For example, work on loopy Bayes nets, imprecise and indefinite probabilities, and probabilistic logic networks is relevant here.
- Probabilistic Evolutionary Program Learning. One of the more powerful optimization techniques available is "probabilistic evolutionary learning," also known as Estimation of Distribution Algorithms (EDAs). Recent research by Moshe Looks has extended EDAs to automated program learning, but the state of the art only allows automated learning of relatively simple programs. Extension of this paradigm is necessary to allow learning of programs involving recursion and other complex programming constructs.
- Probabilistic Inference Driven Self-Modification of Program Code. Is it possible to write code that uses probabilistic reasoning to model its own behavior, and then modifies itself accordingly? Two proprietary AGI designs, Novamente and Self-Aware Systems, use aspects of this idea. However, there is no general theory covering this kind of algorithm, and many possible approaches may be viable.
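To make the EDA idea concrete, here is a minimal PBIL-style sketch on the toy "OneMax" bitstring problem; it illustrates only the core model/sample/select/update loop, not the program-learning extensions discussed above:

```python
import random

# Minimal PBIL-style Estimation of Distribution Algorithm (EDA) on the
# toy OneMax problem: find a bitstring maximizing the number of 1-bits.
# Instead of mutating a population, an EDA maintains an explicit
# probabilistic model of good solutions and re-samples from it.

def pbil_onemax(n_bits=20, pop_size=50, learn_rate=0.1, gens=200, seed=0):
    rng = random.Random(seed)
    probs = [0.5] * n_bits  # model: per-bit probability of sampling a 1
    for _ in range(gens):
        # sample a population from the current model
        pop = [[int(rng.random() < p) for p in probs] for _ in range(pop_size)]
        best = max(pop, key=sum)  # select the fittest sample
        # shift the model toward the best individual
        probs = [(1 - learn_rate) * p + learn_rate * b
                 for p, b in zip(probs, best)]
    return probs

model = pbil_onemax()
print(sum(p > 0.9 for p in model), "of", len(model), "bits converged toward 1")
```

Program learning replaces the fixed-length bitstring with a space of program trees, which is what makes building a useful probabilistic model so much harder.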
Thursday, April 7, 2011
Research Areas | Singularity Institute for Artificial Intelligence
via singinst.org