01325nas a2200145 4500008004100000245007000041210006900111520084200180653003101022653002701053100001801080700002101098700002001119856004001139 2010 eng d00aExtracting Reduced Logic Programs from Artificial Neural Networks0 aExtracting Reduced Logic Programs from Artificial Neural Network3 aArtificial neural networks can be trained to perform excellently in many application areas. While they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: Trained networks are *black boxes*. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we will study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as *simple* as possible - where *simple* is understood in a clearly defined and meaningful way.10aartificial neural networks10areduced logic programs1 aLehmann, Jens1 aBader, Sebastian1 aHitzler, Pascal uhttp://knoesis.wright.edu/node/162001527nas a2200181 4500008004100000245005900041210005700100300001400157520094400171653003501115653003101150653003201181653002701213100002101240700002401261700002001285856004001305 2008 eng d00aConnectionist Model Generation: A First-Order Approach0 aConnectionist Model Generation A FirstOrder Approach a2420-24323 aKnowledge-based artificial neural networks have been applied quite successfully to propositional knowledge representation and reasoning tasks.
However, as soon as these tasks are extended to structured objects and structure-sensitive processes as expressed, e.g., by means of first-order predicate logic, it is not obvious at all what neural-symbolic systems would look like such that they are truly connectionist, are able to learn, and allow for a declarative reading and logical reasoning at the same time. The core method aims at such an integration. It is a method for connectionist model generation using recurrent networks with a feed-forward core. We show in this paper how the core method can be used to learn first-order logic programs in a connectionist fashion, such that the trained network is able to do reasoning over the acquired knowledge. We also report on experimental evaluations which show the feasibility of our approach.10aConnectionist Model Generation10aFirst-Order Logic Programs10aNeural-Symbolic Integration10aRecurrent RBF Networks1 aBader, Sebastian1 aHolldobler, Steffen1 aHitzler, Pascal uhttp://knoesis.wright.edu/node/162201531nas a2200145 4500008004100000245008300041210006900124520103900193653002801232100002101260700002401281700002001305700002001325856004001345 2007 eng d00aThe Core Method: Connectionist Model Generation for First-Order Logic Programs0 aCore Method Connectionist Model Generation for FirstOrder Logic 3 aIn Artificial Intelligence, knowledge representation studies the formalisation of knowledge and its processing within machines. Techniques of automated reasoning allow a computer system to draw conclusions from knowledge represented in a machine-interpretable form. Recently, ontologies have evolved in computer science as computational artefacts to provide computer systems with a conceptual yet computational model of a particular domain of interest. In this way, computer systems can base decisions on reasoning about domain knowledge, similar to humans. This chapter gives an overview of basic knowledge representation aspects and of ontologies as used within computer systems.
After introducing ontologies in terms of their appearance, usage and classification, it addresses concrete ontology languages that are particularly important in the context of the Semantic Web. The most recent and predominant ontology languages and formalisms are presented in relation to each other and a selection of them is discussed in more detail.10aArtificial Intelligence1 aBader, Sebastian1 aHolldobler, Steffen1 aWitzel, Andreas1 aHitzler, Pascal uhttp://knoesis.wright.edu/node/204400907nas a2200157 4500008004100000245008100041210006900122260002100191300001200212520040000224100002100624700002400645700002000669700002000689856004000709 2007 eng d00aA Fully Connectionist Model Generator for Covered First-Order Logic Programs0 aFully Connectionist Model Generator for Covered FirstOrder Logic aHyderabad, India a666-6713 aWe present a fully connectionist system for the learning of first-order logic programs and the generation of corresponding models: Given a program and a set of training examples, we embed the associated semantic operator into a feed-forward network and train the network using the examples. This results in the learning of first-order knowledge while damaged or noisy data is handled gracefully.1 aBader, Sebastian1 aHolldobler, Steffen1 aWitzel, Andreas1 aHitzler, Pascal uhttp://knoesis.wright.edu/node/120601472nas a2200133 4500008004100000245007900041210006900120260003500189520100600224100002101230700002701251700002001278856004001298 2005 eng d00aComputing First-Order Logic Programs by Fibring Artificial Neural Networks0 aComputing FirstOrder Logic Programs by Fibring Artificial Neural aClearwater Beach, Florida, USA3 aThe integration of symbolic and neural-network-based artificial intelligence paradigms constitutes a very challenging area of research. The overall aim is to merge these two very different major approaches to intelligent systems engineering while retaining their respective strengths. 
For symbolic paradigms that use the syntax of some first-order language this appears to be particularly difficult. In this paper, we will extend an idea proposed by Garcez and Gabbay (2004) and show how first-order logic programs can be represented by fibred neural networks. The idea is to use a neural network to iterate a global counter *n*. For each clause C_{i} in the logic program, this counter is combined (fibred) with another neural network, which determines whether C_{i} outputs an atom of level *n* for a given interpretation *I*. As a result, the fibred network computes the single-step operator T_{P} of the logic program, thus capturing the semantics of the program.1 aBader, Sebastian1 aGarcez, Artur, S. D'A.1 aHitzler, Pascal uhttp://knoesis.wright.edu/node/120000353nas a2200097 4500008004100000245006800041210006500109100002100174700002000195856004000215 2005 eng d00aDimensions of Neural-Symbolic Integration - A Structured Survey0 aDimensions of NeuralSymbolic Integration A Structured Survey1 aBader, Sebastian1 aHitzler, Pascal uhttp://knoesis.wright.edu/node/204501211nas a2200109 4500008004100000245007000041210006900111520084200180100001801022700002101040856004001061 2005 eng d00aExtracting Reduced Logic Programs from Artificial Neural Networks0 aExtracting Reduced Logic Programs from Artificial Neural Network3 aArtificial neural networks can be trained to perform excellently in many application areas. While they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: Trained networks are *black boxes*. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process.
In this paper, we will study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as *simple* as possible - where *simple* is understood in a clearly defined and meaningful way.1 aLehmann, Jens1 aBader, Sebastian uhttp://knoesis.wright.edu/node/180900759nas a2200121 4500008004100000245006800041210006700109520035000176100002100526700002000547700003000567856004000597 2005 eng d00aOntology Learning as a Use Case for Neural-Symbolic Integration0 aOntology Learning as a Use Case for NeuralSymbolic Integration3 aWe argue that the field of neural-symbolic integration is in need of identifying application scenarios for guiding further research. We furthermore argue that ontology learning - as occurring in the context of semantic technologies - provides such an application scenario with potential for success and high impact on neural-symbolic integration.1 aBader, Sebastian1 aHitzler, Pascal1 aGarcez, Artur, S. D'Avila uhttp://knoesis.wright.edu/node/231801273nas a2200133 4500008004100000245013500041210006900176260001700245520077200262100002101034700002401055700002001079856004001099 2004 eng d00aThe Integration of Connectionism and First-Order Knowledge Representation and Reasoning as a Challenge for Artificial Intelligence0 aIntegration of Connectionism and FirstOrder Knowledge Representa aTokyo, Japan3 aIntelligent systems based on first-order logic on the one hand, and on artificial neural networks (also called connectionist systems) on the other, differ substantially. It would be very desirable to combine the robust neural networking machinery with symbolic knowledge representation and reasoning paradigms like logic programming in such a way that the strengths of either paradigm will be retained. Current state-of-the-art research, however, fails by far to achieve this ultimate goal.
As one of the main obstacles to be overcome we perceive the question of how symbolic knowledge can be encoded by means of connectionist systems: Satisfactory answers to this will naturally lead the way to knowledge extraction algorithms and to integrated neural-symbolic systems.1 aBader, Sebastian1 aHolldobler, Steffen1 aHitzler, Pascal uhttp://knoesis.wright.edu/node/119201254nas a2200133 4500008004100000245009200041210006900133300001300202520080100215653002301016100002101039700002001060856004001080 2004 eng d00aLogic Programs, Iterated Function Systems, and Recurrent Radial Basis Function Networks0 aLogic Programs Iterated Function Systems and Recurrent Radial Ba a273- 3003 aGraphs of the single-step operator for first-order logic programs - displayed in the real plane - exhibit self-similar structures known from topological dynamics, i.e. they appear to be *fractals*, or more precisely, attractors of iterated function systems. We show that this observation can be made mathematically precise. In particular, we give conditions which ensure that those graphs coincide with attractors of suitably chosen iterated function systems, and conditions which allow the approximation of such graphs by iterated function systems or by fractal interpolation. Since iterated function systems can easily be encoded using recurrent radial basis function networks, we eventually obtain connectionist systems which approximate logic programs in the presence of function symbols.10aiterated functions1 aBader, Sebastian1 aHitzler, Pascal uhttp://knoesis.wright.edu/node/1635