The project is aimed at creating an easily accessible and articulated visual representation of OpenCog's AtomSpace, in which each node and link is graphically represented so that a user can focus on a single node and the links associated with it; selecting a node quickly reloads the page with that node's subgraph extracted from the general graph. Tsega recently introduced the JavaScript library D3 into the current visualizer, which allows the OpenCog AtomSpace content to be rendered as a hypergraph using a force-directed layout that effectively conveys the atoms and the relationships between them.
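As a rough illustration of the data handoff involved, the sketch below (Python, using a hypothetical list of atoms; the visualizer's actual server code and the AtomSpace API differ) shows how AtomSpace content could be serialized into the nodes/links JSON shape that a D3 force layout typically consumes. Representing each link atom as its own graph node is one way to preserve the hypergraph structure:

    import json

    # Hypothetical atoms, not the real AtomSpace API: node atoms are
    # (id, type, name); link atoms point at an ordered outgoing set.
    node_atoms = [
        {"id": "h1", "type": "ConceptNode", "name": "cat"},
        {"id": "h2", "type": "ConceptNode", "name": "animal"},
    ]
    link_atoms = [
        {"id": "h3", "type": "InheritanceLink", "outgoing": ["h1", "h2"]},
    ]

    def to_d3_graph(node_atoms, link_atoms):
        """Flatten atoms into the {nodes, links} structure a D3 force
        layout consumes. Each link atom becomes its own node so the
        hypergraph (links that connect links) is preserved."""
        nodes = [{"id": a["id"], "label": a["name"], "group": a["type"]}
                 for a in node_atoms]
        links = []
        for a in link_atoms:
            nodes.append({"id": a["id"], "label": a["type"], "group": "link"})
            for target in a["outgoing"]:
                links.append({"source": a["id"], "target": target})
        return {"nodes": nodes, "links": links}

    print(json.dumps(to_d3_graph(node_atoms, link_atoms), indent=2))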
The objective of this project is to fix problems in parts of opencog/embodiment by implementing thread safety, so that a portion of a program or routine in embodiment can be called from multiple threads without unwanted interaction between them. Embodiment also contains a so-called Action Interface, used to specify the actions that an OpenCog agent will perform in the game world. This interface was considered very difficult to modify because it is spread out over many files. The programmer also uses simple action interfaces, written as JSON scripts, to test embodiment with turtle graphics, and to check and modify the Action Interface so that custom actions can be created and seen executing in embodiment.
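A minimal sketch of the kind of thread safety intended, assuming a shared action queue guarded by a lock (the class and method names here are illustrative, not embodiment's actual API):

    import threading

    class ActionQueue:
        """Illustrative shared structure: several threads may enqueue
        actions, so every access to the list is guarded by one lock."""
        def __init__(self):
            self._lock = threading.Lock()
            self._actions = []

        def push(self, action):
            with self._lock:  # only one thread mutates the list at a time
                self._actions.append(action)

        def pop_all(self):
            with self._lock:
                actions, self._actions = self._actions, []
                return actions

    queue = ActionQueue()
    threads = [threading.Thread(target=queue.push, args=("step_%d" % i,))
               for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(queue.pop_all())  # all four actions, with no lost updates

Without the lock, concurrent `push` calls could interleave and corrupt or drop entries; with it, callers from multiple threads see consistent state, which is the property the project aims to establish across embodiment.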
The project goal is to use OpenPsi (within OpenCog) to control the behavior of a simulated robot in Blender using behavior trees. There are three Goal Nodes: novelty-seeking (N), safety-seeking (S), and pleasing the human (P). In addition there are two behaviors: (A) look at the user when they are talking, and respond to them, and (B) look at something salient when it pops up in the environment. The task therefore requires configuring OpenPsi, building a connection between OpenCog (where OpenPsi lives) and Owyl for sending action signals, and building a connection between OpenCog and the Blender simulation world for sending information about salience and whether the user is talking. The programmer uses Python and rospy for the two-way communication between OpenPsi and Blender.
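A minimal rospy sketch of that two-way link, with topic names and message types chosen here purely for illustration (the project's actual topics, messages, and goal arbitration logic may differ):

    #!/usr/bin/env python
    import rospy
    from std_msgs.msg import Bool, String

    def on_user_talking(msg):
        # Blender reports whether the user is talking; OpenPsi would weigh
        # this against the N/S/P goals and, if behavior A wins, attend to
        # the user.
        if msg.data:
            action_pub.publish(String(data="look_at_user"))

    def on_salience(msg):
        # Something salient appeared; behavior B (look at the salient
        # thing) may fire instead.
        action_pub.publish(String(data="look_at:" + msg.data))

    rospy.init_node("openpsi_bridge")
    action_pub = rospy.Publisher("/robot/action", String, queue_size=10)
    rospy.Subscriber("/blender/user_talking", Bool, on_user_talking)
    rospy.Subscriber("/blender/salient_object", String, on_salience)
    rospy.spin()

The Blender side would publish on the two input topics and subscribe to `/robot/action` to animate the simulated robot, giving the bidirectional flow the project describes.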
Assigned Programmer-> Natnael Shewangizaw
Client-> OpenCog
Project Schedule-> Started in March 2015. Deadline: May 2015
The project goal is to make several modifications to the original Robosapien toy robot. Since the original version is operated only through an infrared remote control, the project aims to replace this control system with a WiFi connection so that users can control the robot from smartphones, tablets, or computers. The original version has no vision processor, so the project will also add this feature, equipping the robot with a built-in video camera able to sense depth. The robot will then be given AI functionality enabling it to take pictures on its own or navigate its way without human help.
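As an illustration of the WiFi control path, here is a minimal sketch of a command server the robot's onboard controller could run, with made-up command names and codes (the real firmware interface and IR code table are not specified here):

    import socket

    # Hypothetical mapping from text commands, sent over WiFi from a
    # phone/tablet/PC, to codes the robot's existing control logic accepts.
    COMMANDS = {"forward": 0x86, "backward": 0x87, "stop": 0x8E}

    def serve(host="0.0.0.0", port=9000):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            with conn:
                cmd = conn.recv(64).decode().strip()
                code = COMMANDS.get(cmd)
                if code is not None:
                    # Placeholder: here the controller would drive the
                    # robot's motors or its existing IR control line.
                    print("would execute 0x%02X for %r" % (code, cmd))
                    conn.sendall(b"ok\n")
                else:
                    conn.sendall(b"unknown command\n")

    if __name__ == "__main__":
        serve()

Any WiFi-connected device can then send a plain-text command to port 9000, replacing the line-of-sight infrared remote with a network interface.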
