Personal Information
Laboratory Research Projects (Participated)
Key Technologies of Human-Machine Intelligence Collaboration and Their Applications in Intelligent Manufacturing
Reliable Intelligent Manufacturing Driven by Untrusted Intelligence
Academic Achievements
Authored or co-authored 1 patent; 2 papers accepted or published; 0 papers submitted and awaiting acceptance.
Patents
-
A Dynamic Test-Item Yield Calculation Method Based on Fault Trees and Correlation Analysis
Yun-Bo Zhao,
Shusen Ma,
Kangcheng Wang,
Yu Kang,
and Peng Bai
[Abs]
This invention relates to the field of intelligent manufacturing and discloses a dynamic test-item yield calculation method based on fault trees and correlation analysis. The method comprises: building a fault tree of mainboard test items; constructing a correlation matrix of test items from the fault tree and using it to pre-select one or more mandatory test items strongly correlated with the dynamic test item; comparing the correlation coefficients between test items against a user-set threshold to select the mandatory test item most strongly associated with the dynamic test item; and using that mandatory item's yield as the dynamic item's yield. The invention provides a more scientific and reasonable method for optimizing the mainboard functional test strategy and improving test efficiency.
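The selection logic described in the abstract (pre-filter mandatory items by a correlation threshold, then reuse the yield of the most strongly correlated one) can be sketched as follows; the function name, threshold value, and data layout are illustrative assumptions, not taken from the patent text.

```python
def dynamic_item_yield(corrs, yields, threshold=0.8):
    """Estimate a dynamic test item's yield from mandatory test items.

    corrs     : correlation coefficients between the dynamic item and
                each mandatory item (from the fault-tree correlation matrix)
    yields    : measured yields of the mandatory items
    threshold : user-set cut-off for "strong" correlation (illustrative value)
    """
    # Step 1: pre-select mandatory items strongly correlated with the dynamic item.
    candidates = [i for i, c in enumerate(corrs) if abs(c) >= threshold]
    if not candidates:
        raise ValueError("no mandatory item is strongly correlated")
    # Step 2: keep the mandatory item with the strongest correlation.
    best = max(candidates, key=lambda i: abs(corrs[i]))
    # Step 3: reuse that item's yield as the dynamic item's yield.
    return yields[best]

print(dynamic_item_yield([0.3, 0.92, 0.85], [0.97, 0.99, 0.95]))  # 0.99
```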
Journal Articles
-
Multivariate Time Series Modeling and Forecasting with Parallelized Convolution and Decomposed Sparse-Transformer
Shusen Ma,
Yun-Bo Zhao,
Yu Kang,
and Peng Bai
IEEE Trans. Artif. Intell.
2024
[Abs]
[doi]
[pdf]
Many real-world scenarios require accurate predictions of time series, especially in the case of long sequence time-series forecasting (LSTF), such as predicting traffic flow and electricity consumption. However, existing time series prediction models encounter certain limitations. Firstly, they struggle with mapping the multidimensional information present in each time step to high dimensions, resulting in information coupling and increased prediction difficulty. Secondly, these models fail to effectively decompose the intertwined temporal patterns within the time series, which hinders their ability to learn more predictable features. To overcome these challenges, we propose a novel end-to-end LSTF model with parallelized convolution and decomposed sparse-Transformer (PCDformer). PCDformer achieves the decoupling of input sequences by parallelizing the convolutional layers, enabling the simultaneous processing of different variables within the input sequence. To decompose distinct temporal patterns, PCDformer incorporates a temporal decomposition module within the encoder-decoder structure, effectively separating the input sequence into predictable seasonal and trend components. Additionally, to capture the correlation between variables and mitigate the impact of irrelevant information, PCDformer utilizes a sparse self-attention mechanism. Extensive experimentation conducted on five diverse datasets demonstrates the superior performance of PCDformer in LSTF tasks compared to existing approaches, particularly outperforming encoder-decoder-based models.
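The seasonal/trend separation the abstract refers to can be illustrated with a minimal moving-average split, a common realization of such temporal decomposition modules; the window size and edge handling here are illustrative assumptions, not PCDformer's actual implementation.

```python
def decompose(series, window=5):
    """Split a series into a trend part (local moving average) and a
    seasonal part (the residual). Windows at the edges are truncated."""
    half = window // 2
    trend = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))  # local mean = trend
    seasonal = [x - t for x, t in zip(series, trend)]  # residual = seasonal
    return trend, seasonal
```

On a constant series the seasonal part vanishes; on a ramp the trend follows the ramp and the seasonal part is nonzero only at the truncated edges.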
-
TCLN: A Transformer-Based Conv-LSTM Network for Multivariate Time Series Forecasting
Shusen Ma,
Tianhao Zhang,
Yun-Bo Zhao,
Yu Kang,
and Peng Bai
Appl Intell
2023
[Abs]
[doi]
[pdf]
The study of multivariate time series forecasting (MTSF) problems is of high significance in many areas, such as industrial forecasting and traffic flow forecasting. Traditional forecasting models pay more attention to the temporal features of variables and lack depth in extracting spatial and spatiotemporal features between variables. In this paper, a novel model based on the Transformer, convolutional neural network (CNN), and long short-term memory (LSTM) network is proposed to address these issues. The model first extracts the spatial feature vectors through the proposed Multi-kernel CNN. Then it fully extracts the temporal information by the Encoder layer that consists of the Transformer encoder layer and the LSTM network, which can also obtain the potential spatiotemporal correlation. To extract more feature information, we stack multiple Encoder layers. Finally, the output is decoded by the Decoder layer composed of the ReLU activation function and the Linear layer. To further improve the model's robustness, we also integrate an autoregressive model. In model evaluation, the proposed model achieves significant performance improvements over the current benchmark methods for MTSF tasks on four datasets. Further experiments demonstrate that the model can be used for long-horizon forecasting and achieves satisfactory results on the yield forecasting of test items (our private dataset, TIOB).
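The autoregressive integration the abstract mentions, where a linear AR term over recent observations is added to the deep network's output to improve robustness (a design popularized by LSTNet), can be sketched as follows; the names and the combination-by-addition are assumptions for illustration, not the paper's exact formulation.

```python
def combined_forecast(neural_out, history, ar_weights, ar_bias=0.0):
    """Final prediction = nonlinear model output + linear AR component.

    neural_out : output of the deep (CNN/LSTM/Transformer) part
    history    : recent observations of the target variable
    ar_weights : learned weights over the last len(ar_weights) steps
    """
    recent = history[-len(ar_weights):]  # only the most recent observations
    ar_out = ar_bias + sum(w * x for w, x in zip(ar_weights, recent))
    return neural_out + ar_out  # the two components are simply summed
```

The linear part keeps the forecast sensitive to scale shifts in the input that purely nonlinear components can dampen.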