Academic Journal of Engineering and Technology Science, 2023, 6(7); doi: 10.25236/AJETS.2023.060704.

Wasserstein-Gradient Flow Based Sample Replenishment Method for Power Flow Datasets

Author(s)

Meng Xianbo, Li Yalou, Wang Zigan, Hu Shanhua

Corresponding Author:
Meng Xianbo
Affiliation(s)

China Electric Power Research Institute, Haidian District, Beijing, China

Abstract

The application of artificial intelligence (AI) methods to power grid analysis has been studied extensively. The distribution characteristics of the power flow dataset used to train an AI model directly affect the model's performance. The power flow data accumulated for offline analysis are manually adjusted to limit operating modes and are therefore distributed along the grid's operating boundary, so the offline-analysis power flow dataset has favorable distribution characteristics. However, such data are few in number and inefficient to generate manually, which makes it difficult to exploit the advantages of this distribution. This paper proposes a sample replenishment method for power flow datasets based on Wasserstein gradient flow, which adjusts the dataset while accounting for its distribution characteristics by solving the dynamic process that the Wasserstein gradient flow induces on the dataset. The method is tested on the power flow dataset of the CEPRI-36 node grid; the generated supplemental samples all share distribution characteristics similar to those of the target dataset, which verifies the effectiveness of the method.
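For intuition, the following is a minimal, self-contained sketch (not the authors' implementation) of a particle-based Wasserstein gradient flow: a small set of synthetic samples is pushed toward the empirical distribution of a target power flow dataset by gradient descent on an entropy-regularized optimal transport cost. The function names (sinkhorn_cost, flow_samples), the regularization scaling, the step sizes, and the toy data dimensions are all illustrative assumptions, not details taken from the paper.

import torch

def sinkhorn_cost(x, y, eps=0.05, n_iters=100):
    # Entropy-regularized optimal transport cost between the uniform empirical
    # measures on x (n, d) and y (m, d); differentiable with respect to x.
    C = torch.cdist(x, y, p=2) ** 2           # squared Euclidean ground cost
    eps = eps * C.detach().mean()             # scale regularization to the cost magnitude (assumption)
    K = torch.exp(-C / eps)                   # Gibbs kernel
    n, m = x.shape[0], y.shape[0]
    a = torch.full((n,), 1.0 / n, dtype=x.dtype)
    b = torch.full((m,), 1.0 / m, dtype=x.dtype)
    v = torch.ones_like(b)
    for _ in range(n_iters):                  # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]           # approximate transport plan
    return torch.sum(P * C)

def flow_samples(x0, y, n_steps=200, lr=0.05):
    # Explicit Euler discretization of the particle gradient flow of the OT cost:
    # each step moves the synthetic samples x along -grad_x sinkhorn_cost(x, y).
    x = x0.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        sinkhorn_cost(x, y).backward()
        opt.step()
    return x.detach()

# Toy usage: generate 20 new 10-dimensional samples drawn toward a 200-sample
# target set (random stand-ins here for power flow features).
torch.manual_seed(0)
target = torch.randn(200, 10) + 2.0
new_samples = flow_samples(torch.randn(20, 10), target)

In this sketch the flowed particles inherit the target set's distribution characteristics by construction, which is the role the replenished samples play in the paper; the actual functional, discretization, and power flow feature encoding used by the authors may differ.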

Keywords

power flow dataset; optimal transport; Wasserstein gradient flow

Cite This Paper

Meng Xianbo, Li Yalou, Wang Zigan, Hu Shanhua. Wasserstein-Gradient Flow Based Sample Replenishment Method for Power Flow Datasets. Academic Journal of Engineering and Technology Science (2023) Vol. 6, Issue 7: 18-23. https://doi.org/10.25236/AJETS.2023.060704.
