Neural networks are widely applied to lossy coding of audio, images, and similar data, but see little use in general lossless data coding. Addressing this situation, this paper examines in detail the respective characteristics of maximum entropy statistical models and neural network algorithms, and proposes a neural network probability prediction model based on the maximum entropy principle, combined with adaptive arithmetic coding, for data compression; the result is an adaptive online learning algorithm with a streamlined network structure. Experiments show that the algorithm outperforms the popular Lempel-Ziv compressors (zip, gzip) in compression ratio, and is also competitive with PPM and the Burrows-Wheeler algorithm in running time and memory requirements. The algorithm is implemented as a two-layer neural network with multiple inputs and a single output; the parameters learned from already coded bits serve as the working parameters for the bit to be coded, which matches the context-dependent nature of data, improves prediction accuracy, and saves coding time.

Key words: arithmetic coding; data compression; maximum entropy; neural network

Lossless Data Compression with Neural Network Based on Maximum Entropy Theory

FU Yan, ZHOU Jun-lin, WU Yue

Abstract: Neural networks are used more often in lossy data coding domains such as audio and image than in general lossless data coding, because standard neural networks must be trained off-line and are too slow to be practical. In this paper, an adaptive arithmetic coding algorithm based on maximum entropy and neural networks is proposed for data compression. This adaptive algorithm has a simple structure, learns on-line, and does not need off-line training. Experiments show that the algorithm surpasses traditional Lempel-Ziv compressors (zip, gzip) in compression ratio and is competitive in speed and memory with traditional coders such as PPM and the Burrows-Wheeler algorithm. The compressor is a bit-level predictive arithmetic coder using a two-layer network with multiple inputs and one output. Conforming to the context constraints of the data, the algorithm improves prediction precision and reduces coding time.

Key words: arithmetic coding; data compression; maximum entropy; neural network
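The abstracts describe a two-layer, multi-input, single-output network that learns online from already coded bits and supplies the bit probability to an adaptive arithmetic coder. Since the paper's own code is not shown here, the Python sketch below only illustrates that general idea; the context length, hidden width, learning rate, and all names (OnlineBitPredictor, code_length, and so on) are assumptions chosen for illustration, not values from the paper.

    import math
    import random

    CONTEXT_BITS = 16      # assumed number of previous bits used as network input
    HIDDEN_UNITS = 8       # assumed hidden-layer width
    LEARNING_RATE = 0.02   # assumed online learning rate


    class OnlineBitPredictor:
        """Two-layer network, multiple inputs and one output in (0, 1)."""

        def __init__(self):
            rnd = random.Random(0)
            self.w1 = [[rnd.uniform(-0.1, 0.1) for _ in range(CONTEXT_BITS)]
                       for _ in range(HIDDEN_UNITS)]
            self.w2 = [rnd.uniform(-0.1, 0.1) for _ in range(HIDDEN_UNITS)]
            self.h = [0.0] * HIDDEN_UNITS
            self.p = 0.5

        @staticmethod
        def _sigmoid(x):
            return 1.0 / (1.0 + math.exp(-x))

        def predict(self, context):
            # context: list of CONTEXT_BITS previous bits (0 or 1)
            self.h = [self._sigmoid(sum(w * c for w, c in zip(row, context)))
                      for row in self.w1]
            self.p = self._sigmoid(sum(w * h for w, h in zip(self.w2, self.h)))
            return self.p

        def update(self, context, bit):
            # One step of online gradient descent on the coding (log) loss,
            # using only the bit that has just been coded.
            err = bit - self.p
            for j, h in enumerate(self.h):
                delta = err * self.w2[j] * h * (1.0 - h)
                self.w2[j] += LEARNING_RATE * err * h
                for i, c in enumerate(context):
                    self.w1[j][i] += LEARNING_RATE * delta * c


    def code_length(p, bit):
        # Ideal code length, in bits, that an arithmetic coder would spend
        # coding `bit` under probability estimate p of a 1.
        return -math.log2(p if bit else 1.0 - p)


    # Toy usage: predict, measure the coding cost, then learn from the coded bit.
    predictor = OnlineBitPredictor()
    context = [0] * CONTEXT_BITS
    total = 0.0
    for bit in [1, 0, 1, 1, 0, 1, 0, 0] * 8:
        p = predictor.predict(context)
        total += code_length(p, bit)
        predictor.update(context, bit)          # learn only from already coded bits
        context = context[1:] + [bit]           # slide the bit context window
    print("ideal coded size: %.1f bits for 64 input bits" % total)

In a full compressor, total would not be accumulated explicitly; an adaptive arithmetic coder would instead narrow its coding interval by p or 1 - p for each bit, and the decoder, running the same predictor on the already decoded bits, would reproduce the identical probabilities.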