Personal Information
Lab Research Projects (Participant)
Key Technologies and Integrated Verification of Human-Robot Collaborative Flexible Intelligent Manufacturing
Key Technologies of Human-Machine Intelligent Collaboration and Their Applications in Intelligent Manufacturing
Reliable Intelligent Manufacturing Driven by Untrustworthy Intelligence
Research Topic
Research on Multivariate Time-Series Forecasting Methods Based on a Channel-Independence Strategy
Publications
Authored/co-authored 1 patent; 2 papers accepted or published; 2 papers submitted and under review.
Patents
-
A Dynamic Test-Item Yield Calculation Method Based on Fault Tree and Correlation Analysis
Yun-Bo Zhao,
Shusen Ma,
Kangcheng Wang,
Yu Kang,
and Peng Bai
[Abs]
This invention relates to the field of intelligent manufacturing and discloses a dynamic test-item yield calculation method based on fault tree and correlation analysis. The method builds a fault tree for mainboard test items; constructs a correlation matrix of the test items from the fault tree and uses it to pre-select one or more mandatory test items strongly correlated with the dynamic test item; compares the correlation coefficients between items against a manually set threshold to select the mandatory item most strongly associated with the dynamic item; and then uses that mandatory item's yield as the dynamic item's yield. The invention provides a more scientific and reasonable approach that optimizes the mainboard functional-test strategy and improves testing efficiency.
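The selection logic described in the abstract can be sketched as follows. This is a minimal illustration only, assuming the correlation matrix and yields are already available; the function name, index arguments, and the 0.8 threshold are all hypothetical, not from the patent.

```python
import numpy as np

def dynamic_item_yield(corr, yields, mandatory_idx, dynamic_idx, threshold=0.8):
    """Estimate a dynamic test item's yield from its most-correlated
    mandatory test item.

    corr: correlation matrix between test items (e.g. derived from a
    fault-tree analysis); yields: observed yields of all test items.
    All names and the default threshold are illustrative assumptions.
    """
    # Step 1: keep only mandatory items strongly correlated with the dynamic item
    candidates = [i for i in mandatory_idx
                  if abs(corr[dynamic_idx, i]) >= threshold]
    if not candidates:
        return None  # no sufficiently correlated mandatory item
    # Step 2: pick the mandatory item with the strongest correlation
    best = max(candidates, key=lambda i: abs(corr[dynamic_idx, i]))
    # Step 3: use that item's yield as the dynamic item's yield
    return yields[best]
```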
Journal Articles
-
Multivariate Time-Series Modeling and Forecasting With Parallelized Convolution and Decomposed Sparse-Transformer
Shusen Ma,
Yun-Bo Zhao,
Yu Kang,
and Peng Bai
IEEE Trans. Artif. Intell.
2024
[Abs]
[doi]
[pdf]
Many real-world scenarios require accurate predictions of time series, especially in the case of long sequence time-series forecasting (LSTF), such as predicting traffic flow and electricity consumption. However, existing time-series prediction models encounter certain limitations. First, they struggle with mapping the multidimensional information present in each time step to high dimensions, resulting in information coupling and increased prediction difficulty. Second, these models fail to effectively decompose the intertwined temporal patterns within the time series, which hinders their ability to learn more predictable features. To overcome these challenges, we propose a novel end-to-end LSTF model with parallelized convolution and decomposed sparse-Transformer (PCDformer). PCDformer achieves the decoupling of input sequences by parallelizing the convolutional layers, enabling the simultaneous processing of different variables within the input sequence. To decompose distinct temporal patterns, PCDformer incorporates a temporal decomposition module within the encoder–decoder structure, effectively separating the input sequence into predictable seasonal and trend components. Additionally, to capture the correlation between variables and mitigate the impact of irrelevant information, PCDformer utilizes a sparse self-attention mechanism. Extensive experimentation conducted on five diverse datasets demonstrates the superior performance of PCDformer in LSTF tasks compared to existing approaches, particularly outperforming encoder–decoder-based models.
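The seasonal-trend split mentioned in the abstract is commonly realized with a moving-average decomposition. The sketch below shows that generic idea under stated assumptions; the function name, window size, and padding choice are illustrative and not taken from the PCDformer paper.

```python
import numpy as np

def decompose(x, kernel=25):
    """Moving-average temporal decomposition of a 1-D series into
    seasonal and trend components (a common design in LSTF models;
    details here are illustrative). kernel is assumed odd.
    """
    pad = kernel // 2
    # Pad with edge values so the trend has the same length as the input
    xp = np.pad(x, pad, mode="edge")
    # Trend = centered moving average; seasonal = residual
    trend = np.convolve(xp, np.ones(kernel) / kernel, mode="valid")
    seasonal = x - trend
    return seasonal, trend
```

By construction the two components always sum back to the input, so the decomposition loses no information; the model can then learn the smooth trend and the oscillating seasonal part separately.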
Blog Posts