Wednesday, May 9, 2018

HELIOS RUEHLS SCIENCE REPORT

AN ELECTRONIC EYE

[Image: eye anatomy] The human eye, oversimplifying a bit: a sort of photon-catching and sorting machine

 OPTICAL PHYSICISTS ARE DEVELOPING A TRUE ELECTRONIC "EYE" FAR DIFFERENT FROM A CAMERA.


 The U.S. Defense Advanced Research Projects Agency (DARPA), to which Helios Ruehls has on occasion applied for research grants, is highly interested in the evolution of autonomous drones. How autonomous a drone can be is determined by the level of artificial intelligence that can be applied and by the drone's purpose. There are few moral complications when a drone is limited to intelligence gathering and reconnaissance. But drones carrying, and capable of applying, deadly force at present require a human in the loop, particularly in the shoot / don't-shoot decision-making loop. Even a weaponized drone, however, may be equipped to proceed to the target area on its own, orbit a predetermined area, and "check in" with its human controller when ready to begin the next phase of operations. 

 Despite the availability of high-resolution TV cameras small enough to be no burden on today's drones' lift capacity, drones don't actually "see". What humans perceive when they "see" is a processed image. Our "vision" is created only in part by the image on our retina; our actual vision is a "processed" one created in the brain. The retina receives many picture elements, which in camera and optical physics language we call "pixels". These pixels are transmitted to the brain via the optic nerve. However, the optic nerve, compared to the brain, has a very limited capacity for transmitting these "pixels"; in computer terms we might say the optic nerve has "limited bandwidth". The retina therefore sends only changes in the basic pixel array as they occur, and the brain maintains the "basic image" and adjusts it for the changes.
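
To make the retina analogy concrete, here is a minimal Python sketch of change-only transmission. The function names and the brightness threshold are our own illustration of the principle, not a model of the actual neural code:

```python
import numpy as np

def emit_events(prev_frame, new_frame, threshold=8):
    """'Retina' side: report only pixels whose brightness changed
    by more than `threshold` since the previous frame."""
    changed = np.abs(new_frame - prev_frame) > threshold
    rows, cols = np.nonzero(changed)
    return [(r, c, int(new_frame[r, c])) for r, c in zip(rows, cols)]

def apply_events(held_image, events):
    """'Brain' side: keep the basic image and patch in only the changes."""
    for r, c, value in events:
        held_image[r, c] = value
    return held_image

# Demo: a 64x64 scene in which only a small region changes.
rng = np.random.default_rng(0)
frame0 = rng.integers(0, 200, size=(64, 64))   # integer array, no overflow below
frame1 = frame0.copy()
frame1[10:14, 20:24] += 50                     # a small "moving object"

events = emit_events(frame0, frame1)
print(f"transmitted {len(events)} events out of {frame1.size} pixels")  # 16 of 4096

held = apply_events(frame0.copy(), events)
assert np.array_equal(held, frame1)            # the held image now matches the scene
```

Only 16 of 4,096 pixel values cross the "optic nerve" in this example; that bandwidth saving is the whole point.
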
[Image: simple camera imaging system]


By comparison, camera/computer "vision" programs must create large images and then compare them bit by bit using complex algorithms to comprehend what is happening. Such programming has worked for large-scale terrain-change navigation systems, but not so well for spotting humans moving between and among buildings. A live TV camera trained on a compound and observed by a human has been the only trustworthy solution so far. The inability to really "see" has imposed a limit on the artificial intelligence used in autonomous or near-autonomous drones. Beyond the mission-related ethical and moral considerations, the lack of real "vision" impedes the development of certain self-executing drone navigation systems.
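
Back-of-the-envelope arithmetic (our own illustrative figures, not measured drone data) shows why whole-frame comparison is so expensive:

```python
# Assumed, illustrative numbers for a typical HD camera feed.
width, height, fps = 1920, 1080, 30
pixels_per_second = width * height * fps
print(f"full-frame pipeline: {pixels_per_second:,} pixel comparisons/s")
# -> full-frame pipeline: 62,208,000 pixel comparisons/s

# If only ~1% of the scene actually changes per frame, an event-style
# sensor would report just those changes instead of every pixel.
active_fraction = 0.01
events_per_second = int(pixels_per_second * active_fraction)
print(f"event-style pipeline: {events_per_second:,} events/s")
# -> event-style pipeline: 622,080 events/s
```
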

 Long before we resolve the law-of-armed-conflict ethical issues of when there must be a human "controller" in the loop for weaponized drones, we are going to have to improve the ability of drones to see. Not all navigational tasks can be solved by an AI drone knowing its latitude and longitude; we need, among other things, micro drones that can determine which room of a house they are in. Humans often compensate in ingenious ways for damage to their "sensor array" (sight, hearing, smell, touch). However, for many human activities, no matter how intelligent the human, a full sensory array is needed; we can't, for example, employ blind police officers or aircraft pilots. Sensory perception can enhance or limit the application of "intelligence". 

 DARPA is making progress on developing true artificial vision to operate in tandem with artificial intelligence. But don't expect the resulting technology to be closely held by U.S. Department of Defense interests. DARPA is providing partial funding to an organization based in Zurich, Switzerland known as "iniLabs", which is blazing a trail in the realm of artificial vision, and the resulting findings aren't likely to become the exclusive property of the Department of Defense. 

 The human (or, for that matter, avian) eye/brain system does not have to continuously generate large complex images and then compare them bit by bit. The eye/brain system generates an image, the bulk of which is held by the brain, the rest of which is capable of receiving rapid small changes. The optical physicists of iniLabs call these changes "events". Their system, which they call a "Dynamic Vision Sensor", combines a conventional (camera-like) "imager" with a new invention called an "Event Monitor". 
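
A toy sketch of the hybrid idea follows, assuming periodic full keyframes plus an asynchronous event stream. The class and method names are our invention for illustration; the real iniLabs device implements this in hardware:

```python
import numpy as np

class HybridVisionViewer:
    """Toy consumer of an imager + event-monitor stream: hold a full
    image from the conventional imager, then integrate small
    asynchronous change "events" between keyframes."""

    def __init__(self, first_keyframe):
        self.image = first_keyframe.astype(np.int16)   # the held image

    def on_keyframe(self, frame):
        # Imager path: occasionally refresh the entire held image.
        self.image = frame.astype(np.int16)

    def on_event(self, row, col, polarity, step=16):
        # Event-monitor path: nudge one pixel brighter (+1) or darker (-1)
        # by a fixed step, without retransmitting the whole frame.
        new_value = self.image[row, col] + polarity * step
        self.image[row, col] = min(255, max(0, new_value))

# Usage: one keyframe, then a burst of events as a small object brightens.
viewer = HybridVisionViewer(np.zeros((32, 32)))
for r in range(10, 14):
    for c in range(10, 14):
        viewer.on_event(r, c, polarity=+1)
print(int(viewer.image[11, 11]))   # 16: updated from events alone
```

The design point is the same one the eye/brain system makes: send the cheap, sparse signal most of the time and pay for a full image only occasionally.
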

In experiments so far, the iniLabs "Dynamic Vision Sensor" has proven to be about 130% more accurate than previous devices relying only on event sensing and about 85% more accurate than a camera image. The combination is definitely a big improvement over camera-based big-image-comparison computer applications, and it produces a far more graphic image than previous "event sensors". The lesson of the eye/brain system is that improving the processing of sensor-provided information nets a greater return than improving the actual sensor. In humans, the eye developed to the point where it could transmit depth (at least in tandem with a second sensor) and color, but the optic nerve never really increased in carrying capacity (think bandwidth). It didn't need to; the brain's interpretive ability increased faster, and no real improvement was needed in the actual sensors (eyes) or transmitters (optic nerves). 

 Something similar is going on in artificial vision. There is more research money available for computer science than for optical physics, and computer processing speed continues to expand much faster than optical physics is developing more accurate image capture and transmission systems. The iniLabs project may indicate that we are reaching a long-term plateau in the optical physics portion of artificial vision's evolution, but that need be no more of a block to progress than it was in the evolution of human vision. 

WHAT IT ALL MEANS FOR HELIOS RUEHLS STOCKHOLDERS: 

  Helios Ruehls, Inc. has had some experience with both optical imaging and human vision, gleaned in our "Yellow Lens Project", which brought our optical physicists into working contact with optometrists and ophthalmologists. We developed some institutional understanding and some academic connections, but so far we lack the in-network expertise in computer science to play a major role in the development of artificial vision technology. But we don't need a major role to get our foot in the door. 

 There is great interest in artificial vision beyond the U.S. Department of Defense. We must be on the lookout for underserved opportunities in the commercial sector, and use our understanding of the artificial vision concept to spot potential investment opportunities for our "endowment fund". Our potential for entry into developmental research for artificial vision is another reason for us to consider revising our corporate vision to encompass a more broadly based "skunk works" approach, taking on more types of optical physics work than the pursuit of the fractal lens alone. We cannot drop projects like the fractal lens, but the academic and economic politics of the moment are far from optimal for large grant funding. Contract research in the search for artificial vision won't yield us big intellectual property profits, but it could start to generate regular income. 

 It is time to consider the question: can we survive if our only activity is the development of our own exclusive intellectual property? Isn't the sale of supporting, temporary research services necessary, both for earlier generation of revenue and as an attractant to our academic corporate participants ("sweat equity partners")? Our corporate experience so far tells us there will always be political, military, and academic politics affecting research funding. While we must push for funding of the projects that can give us big payoffs in terms of proprietary intellectual property, selling research support services on the optical physics projects currently in vogue is a wise idea. Going into publishing is another wise idea: it keeps us in the public eye, offers our academic sweat-equity stockholders something they value, and generates useful revenue.  


