A system of distributed robotics, incorporating neural-network-based methods of communication, navigation, input processing, and mapping, is proposed for improved performance and environmental interaction. Each robot contains structures to learn from an “expert system” (an ideal algorithm), from other robots in the system, and from self-feedback. A simulation of the robots’ ability to achieve a simple goal was created in C++, in which 100 observations of the goal’s completion time were measured over 3 trials. Results showed no significant difference in accuracy between robots trained by the “expert system” and those trained by other robots, and a faster error reduction for the robot-trained system. The neural network has been implemented in a dense, parallel design using stochastic bitstream arithmetic on a field-programmable gate array (FPGA). SMIA cell-phone cameras provide visual input to each robot. FPGA interface code was written in VHDL to receive the camera data, which is then processed with the SIFT algorithm to extract salient feature points. Object recognition uses neural-directed database lookup and creation, with feedback from other robots to ensure a standard representation is formed. Local maps are created and shared based on the feedback of multiple robots; navigation proceeds by locating a target on the local map and applying self-feedback until the target is reached. Inter-robot feedback has been shown to be a viable approach for distributed robotics, and may be used in future systems for improved performance in minefield clearing, planetary exploration, military surveillance, and other applications.