The Transport Mechanism of the Neural Network Engine

The Transport Mechanism

Note: as of Joone v. 2.0, a new single-thread engine has been written in order to improve performance on machines with multi-core CPUs. As a consequence, the Layer no longer runs within its own separate thread, so the concepts described below, though still accurate when the network is launched in multi-thread mode, do not apply completely when the new single-thread engine is used.

In order to ensure that any required neural network architecture can be built with Joone, a method of transferring the patterns through the net is required that does not depend on a central point of control.

To accomplish this goal, each layer of Joone is implemented as a Runnable object. As a result, each layer runs independently of every other layer: it gets the input pattern, applies its transfer function, and places the resulting pattern on its output synapses, where the next layer can receive and process it. This is depicted by the following basic illustration:

Where, for each neuron N:
XN – the weighted net input of the neuron = (I1 * WN1) + … + (IP * WNP) + bias, where Ip is the p-th input, WNp is the weight applied to the p-th input, and bias is the neuron's bias (threshold offset)
YN – the output value of the neuron = f(XN)
f(X) – the transfer function (which is dependent on the layer type)
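As a concrete illustration, the two formulas above translate directly into Java. The sigmoid used here is only an example, since the actual transfer function f depends on the layer type.

class NeuronMath {
    // Computes YN = f(XN), where XN = (I1 * WN1) + ... + (IP * WNP) + bias.
    static double neuronOutput(double[] inputs, double[] weights, double bias) {
        double x = bias;
        for (int p = 0; p < inputs.length; p++) {
            x += inputs[p] * weights[p];   // accumulate the weighted net input XN
        }
        return 1.0 / (1.0 + Math.exp(-x)); // apply the (example) sigmoid transfer function
    }
}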

This basic transport mechanism is also used to carry the error from the output layers back to the input layers during the training phase. This allows the weights and biases to be changed according to the chosen learning algorithm (e.g. the backprop algorithm). In other words, the layer objects alternately 'pump' the input signal from the input synapses to the output synapses, and the error pattern from the output synapses to the input synapses.
This pumping action is accomplished by each layer having two opposing transport mechanisms: one from the input to the output, to transfer the input pattern during the recall phase, and another from the output to the input, to transfer the learning error during the training phase. This is depicted in the following figure:

[Figure: a layer's two opposing transport mechanisms, one for the recall phase and one for the training phase]
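To make the pumping idea concrete, the following is a minimal, purely illustrative Java sketch (not Joone's actual Layer implementation) of a layer as a Runnable object with the two opposing transport mechanisms; the BlockingQueue objects merely stand in for the input and output synapses, and the two private methods are placeholders for the real transfer function and learning rule.

import java.util.concurrent.BlockingQueue;

class TrainableLayer implements Runnable {

    // Forward direction (recall phase): patterns flow from input to output.
    private final BlockingQueue<double[]> patternIn, patternOut;
    // Backward direction (training phase): errors flow from output to input.
    private final BlockingQueue<double[]> errorIn, errorOut;
    private final boolean training;

    TrainableLayer(BlockingQueue<double[]> patternIn, BlockingQueue<double[]> patternOut,
                   BlockingQueue<double[]> errorIn, BlockingQueue<double[]> errorOut,
                   boolean training) {
        this.patternIn = patternIn;
        this.patternOut = patternOut;
        this.errorIn = errorIn;
        this.errorOut = errorOut;
        this.training = training;
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                // Pump the pattern forward: input synapses -> output synapses.
                patternOut.put(applyTransfer(patternIn.take()));
                if (training) {
                    // Pump the error backward: output synapses -> input synapses,
                    // adjusting this layer's weights and biases along the way.
                    errorOut.put(adjustWeightsAndPropagate(errorIn.take()));
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // exit cleanly when the net is stopped
        }
    }

    private double[] applyTransfer(double[] pattern) { return pattern.clone(); }         // placeholder
    private double[] adjustWeightsAndPropagate(double[] error) { return error.clone(); } // placeholder
}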

Each Joone component (both the layers and the synapses) has its own pre-built mechanism to adjust the weights and biases according to the chosen learning algorithm.
Complex neural network architectures, whether linear or recursive, can be built easily, because there is no need for a global controller of the net.

Imagine each layer acting as a pump that 'pushes' the signal (the pattern) from its input to its output, where one or more synapses connect it to the next layers, regardless of the number, the sequence, or the nature of the layers connected.

This is the main characteristic of Joone, and it is guaranteed by the fact that each layer runs in its own thread and represents the only active element of any neural network based on Joone's core engine.
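As a purely conceptual illustration (this is not actual Joone code; the real engine creates and manages the layer threads internally), launching a net in the multi-thread engine amounts to starting one thread per layer:

import java.util.List;

class MultiThreadStarter {
    // Every layer is an independent active element, so the whole net starts
    // simply by giving each Runnable layer its own thread; no global controller
    // is needed to coordinate them.
    static void start(List<Runnable> layers) {
        for (Runnable layer : layers) {
            new Thread(layer).start();
        }
    }
}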
Look at the following figure (the arrows represent the synapses):

[Figure: layers connected by synapses to form an arbitrary network architecture]

Any kind of neural network architecture can be built in this manner.

To build a neural network, one simply connects each layer to another as required using synapses, and the net will run without problems. Each layer (running in its own thread) will read its input, apply the transfer function, and write the result to its output synapses. This action can be applied recursively as many times as desired, creating many threads, while building any neural network required.
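As an example, the following sketch builds a small three-layer net by connecting layers with synapses. The class and method names (LinearLayer, SigmoidLayer, FullSynapse, NeuralNet) follow those used in Joone's own sample code; the input/output components, the Monitor parameters and the teacher required for training are deliberately left out of this sketch.

import org.joone.engine.FullSynapse;
import org.joone.engine.LinearLayer;
import org.joone.engine.SigmoidLayer;
import org.joone.net.NeuralNet;

public class TinyNetSketch {
    public static void main(String[] args) {
        // Create the three layers and set their sizes (number of neurons).
        LinearLayer input = new LinearLayer();
        SigmoidLayer hidden = new SigmoidLayer();
        SigmoidLayer output = new SigmoidLayer();
        input.setRows(2);
        hidden.setRows(3);
        output.setRows(1);

        // Create the synapses and use them to connect the layers.
        FullSynapse synapseIH = new FullSynapse(); // input -> hidden
        FullSynapse synapseHO = new FullSynapse(); // hidden -> output
        input.addOutputSynapse(synapseIH);
        hidden.addInputSynapse(synapseIH);
        hidden.addOutputSynapse(synapseHO);
        output.addInputSynapse(synapseHO);

        // Wrap the layers in a NeuralNet container, which manages them as a whole.
        NeuralNet nnet = new NeuralNet();
        nnet.addLayer(input, NeuralNet.INPUT_LAYER);
        nnet.addLayer(hidden, NeuralNet.HIDDEN_LAYER);
        nnet.addLayer(output, NeuralNet.OUTPUT_LAYER);
    }
}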

Joone allows any kind of net to be built thanks to its modular architecture, much like a LEGO brick system.

This is due mainly to the following characteristics:

• The engine is flexible: you can build any architecture you want simply by connecting each layer to another with a synapse, without being concerned about the overall interaction of the architecture. Each layer runs independently, processing the signal at its input and writing the results to its output, where any connected synapses transfer the signal to the next layers, ad infinitum.
• The engine is scalable: if you need more computational power, simply add more CPUs to the system. Each layer, running in a separate thread, will be processed by a different CPU, enhancing the speed of the computation.
• The engine closely mirrors reality: conceptually, the net is not far from a real system (the brain), where each neuron works independently of every other even while being part of a larger interconnected system.

All the above characteristics are also valid for the single-thread engine introduced with version 2.0 of Joone, where the layers do not run within separate threads but are instead invoked from a single external thread which is instantiated and handled by the NeuralNet object. The redesign of the core engine has been carefully implemented in order to provide the same features as the multi-thread version, thereby maintaining almost complete compatibility with previous releases.
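The following simplified sketch is hypothetical code, not Joone 2.0's actual implementation; it only illustrates the single-thread idea: one external thread (owned, in the real engine, by the NeuralNet object) drives the forward and backward steps of every layer in sequence instead of each layer running in its own thread.

import java.util.List;

class SingleThreadDriver implements Runnable {

    // Minimal stand-in for what a layer must expose to the external driver thread.
    interface LayerStep {
        void forward();   // read from the input synapses, write to the output synapses
        void backward();  // read the error from the output synapses, adjust the weights, pass it back to the input synapses
    }

    private final List<LayerStep> layers; // ordered from the input layer to the output layer
    private final boolean training;

    SingleThreadDriver(List<LayerStep> layers, boolean training) {
        this.layers = layers;
        this.training = training;
    }

    @Override
    public void run() {
        // Forward pass: pump the pattern from the first layer to the last.
        for (LayerStep layer : layers) {
            layer.forward();
        }
        if (training) {
            // Backward pass: pump the error from the last layer back to the first.
            for (int i = layers.size() - 1; i >= 0; i--) {
                layers.get(i).backward();
            }
        }
    }
}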
