TY - JOUR
T1 - Extracting Reduced Logic Programs from Artificial Neural Networks
Y1 - 2010
A1 - Jens Lehmann
A1 - Sebastian Bader
A1 - Pascal Hitzler
KW - artificial neural networks
KW - reduced logic programs
AB - Artificial neural networks can be trained to perform excellently in many application areas. While they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: Trained networks are *black boxes*. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as *simple* as possible - where *simple* is understood in some clearly defined and meaningful way.
ER -
TY - JOUR
T1 - Connectionist Model Generation: A First-Order Approach
JF - Neurocomputing
Y1 - 2008
A1 - Sebastian Bader
A1 - Steffen Hölldobler
A1 - Pascal Hitzler
KW - Connectionist Model Generation
KW - First-Order Logic Programs
KW - Neural-Symbolic Integration
KW - Recurrent RBF Networks
AB - Knowledge-based artificial neural networks have been applied quite successfully to propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended to structured objects and structure-sensitive processes, as expressed e.g. by means of first-order predicate logic, it is not at all obvious what neural-symbolic systems would look like such that they are truly connectionist, are able to learn, and allow for a declarative reading and logical reasoning at the same time. The core method aims at such an integration. It is a method for connectionist model generation using recurrent networks with a feed-forward core. We show in this paper how the core method can be used to learn first-order logic programs in a connectionist fashion, such that the trained network is able to reason over the acquired knowledge. We also report on experimental evaluations which show the feasibility of our approach.
ER -
TY - CHAP
T1 - The Core Method: Connectionist Model Generation for First-Order Logic Programs
Y1 - 2007
A1 - Sebastian Bader
A1 - Steffen Hölldobler
A1 - Andreas Witzel
A1 - Pascal Hitzler
KW - Artificial Intelligence
AB - In Artificial Intelligence, knowledge representation studies the formalisation of knowledge and its processing within machines. Techniques of automated reasoning allow a computer system to draw conclusions from knowledge represented in a machine-interpretable form. Recently, ontologies have evolved in computer science as computational artefacts to provide computer systems with a conceptual yet computational model of a particular domain of interest. In this way, computer systems can base decisions on reasoning about domain knowledge, similar to humans. This chapter gives an overview on basic knowledge representation aspects and on ontologies as used within computer systems. After introducing ontologies in terms of their appearance, usage and classification, it addresses concrete ontology languages that are particularly important in the context of the Semantic Web. The most recent and predominant ontology languages and formalisms are presented in relation to each other and a selection of them is discussed in more detail.
ER -
TY - CONF
T1 - A Fully Connectionist Model Generator for Covered First-Order Logic Programs
T2 - Twentieth International Joint Conference on Artificial Intelligence, IJCAI-07
Y1 - 2007
A1 - Sebastian Bader
A1 - Steffen Hölldobler
A1 - Andreas Witzel
A1 - Pascal Hitzler
AB - We present a fully connectionist system for the learning of first-order logic programs and the generation of corresponding models: Given a program and a set of training examples, we embed the associated semantic operator into a feed-forward network and train the network using the examples. This results in the learning of first-order knowledge while damaged or noisy data is handled gracefully.
JA - Twentieth International Joint Conference on Artificial Intelligence, IJCAI-07
CY - Hyderabad, India
ER -
TY - CONF
T1 - Computing First-Order Logic Programs by Fibring Artificial Neural Networks
T2 - Eighteenth International Florida Artificial Intelligence Research Symposium Conference
Y1 - 2005
A1 - Sebastian Bader
A1 - Artur S. D'Avila Garcez
A1 - Pascal Hitzler
AB - The integration of symbolic and neural-network-based artificial intelligence paradigms constitutes a very challenging area of research. The overall aim is to merge these two very different major approaches to intelligent systems engineering while retaining their respective strengths. For symbolic paradigms that use the syntax of some first-order language this appears to be particularly difficult. In this paper, we extend an idea proposed by Garcez and Gabbay (2004) and show how first-order logic programs can be represented by fibred neural networks. The idea is to use a neural network to iterate a global counter *n*. For each clause C_{i} in the logic program, this counter is combined (fibred) with another neural network, which determines whether C_{i} outputs an atom of level *n* for a given interpretation *I*. As a result, the fibred network computes the single-step operator T_{P} of the logic program, thus capturing the semantics of the program.
JA - Eighteenth International Florida Artificial Intelligence Research Symposium Conference
CY - Clearwater Beach, Florida, USA
ER -
TY - CHAP
T1 - Dimensions of Neural-Symbolic Integration - A Structured Survey
Y1 - 2005
A1 - Sebastian Bader
A1 - Pascal Hitzler
ER -
TY - CONF
T1 - Extracting Reduced Logic Programs from Artificial Neural Networks
Y1 - 2005
A1 - Jens Lehmann
A1 - Sebastian Bader
AB - Artificial neural networks can be trained to perform excellently in many application areas. While they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: Trained networks are *black boxes*. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as *simple* as possible - where *simple* is understood in some clearly defined and meaningful way.
ER -
TY - CONF
T1 - Ontology Learning as a Use Case for Neural-Symbolic Integration
T2 - Ontology Learning as a Use Case for Neural-Symbolic Integration
Y1 - 2005
A1 - Sebastian Bader
A1 - Pascal Hitzler
A1 - Artur S. D'Avila Garcez
AB - We argue that the field of neural-symbolic integration is in need of identifying application scenarios for guiding further research. We furthermore argue that ontology learning - as occurring in the context of semantic technologies - provides such an application scenario with potential for success and high impact on neural-symbolic integration.
JA - Ontology Learning as a Use Case for Neural-Symbolic Integration
ER -
TY - CONF
T1 - The Integration of Connectionism and First-Order Knowledge Representation and Reasoning as a Challenge for Artificial Intelligence
T2 - Third International Conference on Information
Y1 - 2004
A1 - Sebastian Bader
A1 - Steffen Hölldobler
A1 - Pascal Hitzler
AB - Intelligent systems based on first-order logic on the one hand, and on artificial neural networks (also called connectionist systems) on the other, differ substantially. It would be very desirable to combine the robust neural networking machinery with symbolic knowledge representation and reasoning paradigms like logic programming in such a way that the strengths of either paradigm are retained. Current state-of-the-art research, however, falls far short of achieving this ultimate goal. As one of the main obstacles to be overcome we perceive the question of how symbolic knowledge can be encoded by means of connectionist systems: Satisfactory answers to this will naturally lead the way to knowledge extraction algorithms and to integrated neural-symbolic systems.
JA - Third International Conference on Information
CY - Tokyo, Japan
ER -
TY - JOUR
T1 - Logic Programs, Iterated Function Systems, and Recurrent Radial Basis Function Networks
JF - Journal of Applied Logic
Y1 - 2004
A1 - Sebastian Bader
A1 - Pascal Hitzler
KW - iterated functions
AB - Graphs of the single-step operator for first-order logic programs - displayed in the real plane - exhibit self-similar structures known from topological dynamics, i.e., they appear to be *fractals*, or more precisely, attractors of iterated function systems. We show that this observation can be made mathematically precise. In particular, we give conditions which ensure that those graphs coincide with attractors of suitably chosen iterated function systems, and conditions which allow the approximation of such graphs by iterated function systems or by fractal interpolation. Since iterated function systems can easily be encoded using recurrent radial basis function networks, we eventually obtain connectionist systems which approximate logic programs in the presence of function symbols.
ER -