Cognitive Scientist, Computer Scientist
Understand the brain
I'm a cross-disciplinary scientist, lecturer and scientific editor. I'm also interested in entrepreneurship, and in investing and market speculation aided by data science, cognitive science, geopolitics and other fields of knowledge. Data science is the interdisciplinary study of methods, processes and systems for extracting insights and knowledge from data.
Though I have degrees in both cognitive science and computer science, I'm also to a large extent an autodidact.
My research interests are wide and include, but are not limited to, artificial intelligence (in particular the subfield known as computational intelligence/soft computing), cognitive science, cognitive neuroscience, cognitive modelling, cognitive robotics, computational philosophy, data science, cognitive architectures, the philosophy of mind as well as in epistemological and other philosophical issues. I have a strong interest in self-organization and emergence, cybernetics and in taking inspiration from biology and nature, which I think shines through in my approaches.
My interests increasingly drift in two directions at once: on the one hand toward biologically inspired AI/cognitive science/cognitive modelling/philosophy, and on the other toward applications to entrepreneurship/investing/market speculation.
A lot of my research has been related to the field of artificial intelligence, the discipline that studies how to create machines and computer programs that display intelligent behaviour. Artificial intelligence includes the sub-discipline of machine learning, which studies how to create machines and computer programs that can learn. A particular class of computational models called artificial neural networks are algorithms (e.g., expressed in the form of computer programs) capable of learning. Artificial neural networks are signal-flow models that try to emulate, or are inspired by, the function of biological neural networks, such as those we find in human and animal nervous systems.
There are different types of artificial neural networks, but common to all of them is that they learn from examples. One type needs to be presented with both stimuli and responses during the learning process. Another type requires only stimuli, and organizes itself into an ordered representation of the stimuli to which it has been exposed. An example of the latter type is the self-organizing map.
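The first type of learning can be sketched in a few lines of code. Below is a minimal, illustrative example (not any specific model of mine): a classic perceptron that is shown both stimuli (inputs) and desired responses (targets) and adjusts its weights from the errors it makes. The task, learning rate and number of passes are arbitrary choices for demonstration.

```python
import numpy as np

# Stimuli and the responses the network should learn (logical AND).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # stimuli
y = np.array([0, 0, 0, 1])                      # desired responses

w = np.zeros(2)  # connection weights
b = 0.0          # bias

for _ in range(20):                  # a few passes over the examples
    for xi, ti in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        err = ti - pred              # perceptron learning rule:
        w += 0.1 * err * xi          # nudge weights toward reducing
        b += 0.1 * err               # the error on this example

print([1 if xi @ w + b > 0 else 0 for xi in X])  # → [0, 0, 0, 1]
```

After training, the network reproduces the stimulus-response mapping it was shown, which is exactly what distinguishes this supervised type from the self-organizing type described next.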
Within cognitive neuroscience, you study how the brain enables mental phenomena: thoughts, perceptions, behaviours and feelings. One application of this knowledge is to inspire and inform the design of cognitive architectures (e.g. implemented as computer programs) that can be used in cognitive robotics, where the aim is to develop robots whose cognitive architectures allow them to learn and to develop perceptions and adequate behaviours through their interaction with their surroundings.
In the philosophy of mind, one studies the nature of consciousness.
I have two main goals with what I do: to understand how brains, cognition (thinking) and consciousness work from a systems perspective through computer modelling, and to develop artificial intelligence methods and artificial cognitive architectures. I am interested in applications of artificial intelligence to practical problems, for example in medical diagnostics and related decision support, forecasting, strategy and robot control. I'm also interested in entrepreneurship/commercialization, and thus I explore my ideas from an entrepreneurial perspective too. I am also interested in the philosophy of mind, a field that studies the nature of consciousness and qualia (i.e. the subjective quality of conscious experiences), as well as in epistemological and other philosophical questions.
I have done a lot of research in artificial neural networks and other types of machine learning, as well as in applications of machine learning, such as artificial intelligence in medical diagnostics and data mining. I have a particularly strong interest in self-organizing processes and hence in self-organizing neural networks, and I have invented some new variants of the self-organizing map invented by Teuvo Kohonen. A self-organizing map consists of a matrix of artificial neurons that, after a learning phase, can represent a certain type of stimulus in an orderly manner. For example, after exposure to a variety of colours, it can learn to represent them so that all blue shades are arranged in one part of the map, while all green and yellow shades are represented in other parts. The transitions between the representations are gradual.
One of the variants I have invented is the associative self-organizing map. It was first created to be used as a building block (roughly modelling a cortical area of the brain) in artificial cognitive architectures. An associative self-organizing map can learn to associate the activity in its self-organized representation of input data with arbitrarily many sets of parallel inputs. For example, it can learn to associate its activity with the activity of another associative self-organizing map, or with its own activity at one or more earlier times. This enables cross-modal expectations: if one sensory modality, say the visual system in a cognitive architecture, produces a certain internal activity pattern due to sensory input, then activity patterns are elicited in the other sensory modalities corresponding to the patterns that usually co-occurring sensory inputs would have triggered there, even when those inputs are absent. This allows, for example, an expectation and imagining of the sound of thunder in the auditory system after the visual perception of a lightning bolt. The use of associative self-organizing maps also enables what can be seen as mental imagery in an artificial cognitive architecture, through a mechanism called internal simulation. According to the internal simulation hypothesis, proposed by Germund Hesslow and related to mirror neuron theory, this is a decisive mechanism in the imagination of humans (and other animals). The hypothesis suggests that when we imagine that we experience something, or that we act in a certain way, neural activity patterns develop in the same brain areas as if we actually had those experiences or actually acted in the way we imagined.
The neural activity patterns produced by imagination thus correspond to those that would have developed if the stimulus/response sequences had actually taken place.
I have researched how humans' and other creatures' recognition of others' actions (gestures, behaviours, ...) and understanding of others' intentions are implemented in the brain, and in particular how similar abilities can be created artificially in action-recognition systems.
I am very interested in modelling in general, for all types of applications: in industry, medicine, economics, etc. I have been involved, for example, in simulating the reorganization of the somatosensory cerebral cortex after damage to the nerve between the hand and the brain, and in modelling urological dysfunction. I have also worked extensively with biologically inspired touch perception in robots, and I have designed and built some (now outdated) robots for this purpose.