Intuition suggests that when an artificial neural network algorithm is implemented in parallel on a multiprocessor system, the topology of the processor network and the fan-in size of each processor node (i.e., its input/output scale) should affect the efficiency of the parallel algorithm. For fully connected and randomly connected neural networks, however, this conclusion does not hold. In parallel implementations of neural networks, processor communication overhead is a major limiting factor. This paper discusses several issues related to the parallel implementation of fully connected and randomly connected neural networks.
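To make the communication-overhead claim concrete, the sketch below (not from the paper; the linear cost form and the constants t_op and t_msg are hypothetical placeholders) models one training epoch of a fully connected network split across P processors. Computation parallelizes with P, but because every processor needs every activation in a fully connected network, the communication term does not shrink with the partitioning, so beyond some P it dominates the epoch time.

```python
# Minimal illustrative cost model (assumed, not the paper's analysis):
# per-epoch learning time = computation time + communication time.

def epoch_time(n_neurons, n_procs, t_op=1e-8, t_msg=1e-6):
    """Estimate one training epoch for a fully connected network of
    n_neurons units (n_neurons**2 weights) on n_procs processors."""
    n_weights = n_neurons ** 2
    # Computation parallelizes: each processor updates its share of weights.
    t_comp = (n_weights / n_procs) * t_op
    # Full connectivity: each processor must exchange all activations with
    # every other processor, so communication grows with n_procs.
    t_comm = n_neurons * (n_procs - 1) * t_msg
    return t_comp + t_comm

if __name__ == "__main__":
    for p in (1, 4, 16, 64):
        print(f"P={p:3d}  epoch time ~ {epoch_time(1024, p):.4f} s")
```

Under these assumed constants, adding processors first reduces and then increases the epoch time, which is the qualitative behavior the abstract attributes to fully connected and randomly connected networks.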
1 Decomposition of Learning Time