Saturday, July 30, 2011

Military Intelligence Looks for Integrative Cognitive Neuroscience Computer Architectures

I am going to take a little side trip away from me and my therapies and think about other things relating to the brain. My Gentle Readers know I am all things about the brain, and that periodically I go on rambles through various aspects of other disciplines, such as technology and ecology, as they relate to the brain. So, Gentle Readers, pick up your hiking staffs and let's venture a little further afield from my latest therapies.
[Image via Wikipedia: results from an fMRI experiment]
I saw this article about military intelligence efforts to model the brain, particularly for processing video. For those of you not in the know about military acronyms: IARPA is the Intelligence Advanced Research Projects Activity, a research agency that reports to the Director of National Intelligence. It was modeled on DARPA, the Defense Advanced Research Projects Agency, which is part of the Defense Department. In addition to basic research on purely military technologies, DARPA helped kick-start things like the Internet and GPS.

BBN – these days owned by US arms giant Raytheon – says it has won a $3m deal from IARPA to "explore how the brain processes massive amounts of fragmented data". The funding comes from IARPA's backronym-tastic Integrated Cognitive-Neuroscience Architectures for Understanding Sensemaking (ICARUS) programme.

Like its military counterpart (Darpa [sic]) , IARPA focuses on high-risk research: that is, on research which is unlikely to deliver anything (and if it does, as in the case of the internet, what it delivers may be something quite other than what was expected). Thus there's no great likelihood that we'll see brain-like computers able to interpret information as well as a human can in a few years as a result of yesterday's deal.

If we do, though, BBN believes that the new gear will help the US spooks with various tasks they struggle to achieve today – in particular that of getting useful intelligence out of huge video files delivered by various forms of overhead surveillance.

When I look at the various proposals here, I really don't see anything that connects any type of sensory cognition, be it visual cognition, auditory cognition, etc., to the higher order functions of sensemaking. I am wondering how limited this approach will be. It seems that some computer professionals have met up with neuroscientists, and they are looking at all kinds of information processing models that vaguely seem like computer models thwacked on top of a brain, with not much to do with how visual processing relates to higher order cognition. Somehow, this ignores a lot of how the visual system operates to collect visual information. It seems that people responding to this proposal are looking at the visual system as a pipe that dumps encoded representations of sight into a machine that will process the contextual information.

Here is a developmental optometrist's view on pattern recognition:

I am wondering if there is something more in understanding the neural substrates of vision that would help computer scientists with automating pattern recognition.
