
Control of WECs with machine learning

Programme

Academic Collaboration

Start date

October 2017

Status

Completed

Lead contractor

Industrial Doctoral Centre for Offshore Renewable Energy

Sub-contractor(s)

Enrico Anderlini

Overview

This IDCORE project investigated the application of reinforcement learning and neural networks to the control of a point absorber, focusing on approaches for static sea states. The controllers proved adaptive to changes in both wave conditions and system dynamics.

The following articles were produced during this project and are published open access.

The final thesis from this project will be made available in due course.

Control of a Point Absorber Using Reinforcement Learning

Abstract: This work presents the application of reinforcement learning to the optimal resistive control of a point absorber. The model-free Q-learning algorithm is selected in order to maximise energy absorption in each sea state. Step changes are made to the controller damping, and the associated reward, i.e. the gain in absorbed power, or penalty for excessive motions is observed. Due to the general periodicity of gravity waves, the absorbed power is averaged over a time horizon lasting several wave periods. The performance of the algorithm is assessed through the numerical simulation of a point absorber subject to motions in heave in both regular and irregular waves. The algorithm is found to converge towards the optimal controller damping in each sea state. Additionally, the model-free approach ensures the algorithm can adapt to changes to the device hydrodynamics over time and is unbiased by modelling errors.

Published in: IEEE Transactions on Sustainable Energy (Volume 7, Issue 4, Oct. 2016)

http://ieeexplore.ieee.org/document/7482650
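The Q-learning scheme the abstract describes can be sketched in a few lines. This is a minimal toy illustration, not the paper's method: the damping grid, the sea-state indices, the `mean_power` curve, and the hyperparameters are all invented stand-ins for the hydrodynamic simulation and tuning reported in the article.

```python
import random

random.seed(0)

# Illustrative assumptions (not from the paper):
DAMPINGS = [1e5, 2e5, 3e5, 4e5, 5e5]   # candidate PTO damping values [N s/m]
SEA_STATES = [0, 1]                     # index of the current (static) sea state
OPTIMUM = {0: 2e5, 1: 4e5}              # assumed optimal damping per sea state

def mean_power(state, damping):
    """Toy time-averaged absorbed power over one multi-wave-period horizon."""
    return 100.0 - 1e-9 * (damping - OPTIMUM[state]) ** 2

Q = {(s, a): 0.0 for s in SEA_STATES for a in range(len(DAMPINGS))}
alpha, eps = 0.5, 0.2   # learning rate and exploration probability

def step(state):
    """One horizon: pick a damping (epsilon-greedy), observe reward, update Q."""
    if random.random() < eps:
        a = random.randrange(len(DAMPINGS))
    else:
        a = max(range(len(DAMPINGS)), key=lambda i: Q[(state, i)])
    r = mean_power(state, DAMPINGS[a])
    Q[(state, a)] += alpha * (r - Q[(state, a)])  # gamma = 0: horizons independent
    return a

for s in SEA_STATES:
    for _ in range(500):
        step(s)
    best = max(range(len(DAMPINGS)), key=lambda i: Q[(s, i)])
    print("sea state", s, "-> learned damping", DAMPINGS[best])
```

Because each averaging horizon is rewarded on its own, a discount factor of zero suffices here, and the Q-table simply converges to the mean reward observed for each sea-state/damping pair.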

Reactive control of a two-body point absorber using reinforcement learning

Abstract: In this article, reinforcement learning is used to obtain optimal reactive control of a two-body point absorber. In particular, the Q-learning algorithm is adopted for the maximization of the energy extraction in each sea state. The controller damping and stiffness coefficients are varied in steps, observing the associated reward, which corresponds to an increase in the absorbed power, or penalty, owing to large displacements. The generated power is averaged over a time horizon spanning several wave cycles due to the periodicity of ocean waves, discarding the transient effects at the start of each new episode. The model of a two-body point absorber is developed in order to validate the control strategy in both regular and irregular waves. In all analysed sea states, the controller learns the optimal damping and stiffness coefficients. Furthermore, the scheme is independent of internal models of the device response, which means that it can adapt to variations in the unit dynamics with time and does not suffer from modelling errors.

Published in: Ocean Engineering. Available online 24 August 2017

http://www.sciencedirect.com/science/article/pii/S0029801817304699
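The per-horizon reward the abstract refers to, i.e. mean absorbed power with a penalty for large displacements, can be illustrated as below. The sinusoidal heave response, the numbers, and the penalty form are assumptions for the sketch; the reactive PTO force F = -B*v - K*x follows the standard formulation, not necessarily the paper's exact simulator.

```python
import math

def horizon_reward(B, K, amp=1.0, omega=1.0, x_max=2.0, dt=0.01, periods=10):
    """Mean power delivered to the PTO over several wave cycles, minus a
    penalty if the displacement limit x_max is exceeded (toy response)."""
    horizon = periods * 2.0 * math.pi / omega
    power_sum, n, worst_x, t = 0.0, 0, 0.0, 0.0
    while t < horizon:
        x = amp * math.sin(omega * t)          # heave displacement
        v = amp * omega * math.cos(omega * t)  # heave velocity
        f_pto = -B * v - K * x                 # reactive PTO force on the float
        power_sum += -f_pto * v                # instantaneous power into the PTO
        worst_x = max(worst_x, abs(x))
        n += 1
        t += dt
    mean_power = power_sum / n                 # stiffness term averages out
    penalty = 1000.0 if worst_x > x_max else 0.0
    return mean_power - penalty

print(horizon_reward(B=2.0, K=0.5))  # roughly B * (amp * omega)**2 / 2
```

For a periodic response the stiffness (reactive) term `K*x*v` averages to zero over whole cycles, which is why only the damping term contributes to the mean power here.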

Reactive control of a wave energy converter using artificial neural networks

Abstract: A model-free algorithm is developed for the reactive control of a wave energy converter. Artificial neural networks are used to map the significant wave height, wave energy period, and the power take-off damping and stiffness coefficients to the mean absorbed power and maximum displacement. These values are computed during a time horizon spanning multiple wave cycles, with data being collected throughout the lifetime of the device so as to train the networks off-line every 20 time horizons. Initially, random values are selected for the controller coefficients to achieve sufficient exploration. Afterwards, a Multistart optimization is employed, which uses the neural networks within the cost function. The aim of the optimization is to maximise energy absorption, whilst limiting the displacement to prevent failures. Numerical simulations of a heaving point absorber are used to analyse the behaviour of the algorithm in regular and irregular waves. Once training has occurred, the algorithm presents a similar power absorption to state-of-the-art reactive control. Furthermore, not only does dispensing with the model of the point-absorber dynamics remove its associated inaccuracies, but it also enables the controller to adapt to variations in the machine response caused by ageing.

Published in: International Journal of Marine Energy, Volume 19, September 2017, Pages 207-220

http://www.sciencedirect.com/science/article/pii/S2214166917300668
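The structure of the control loop described above, i.e. explore with random coefficients, accumulate lifetime data, then optimise a learned surrogate of power and displacement, can be sketched as follows. Two loud simplifications: the "surrogate" here is a nearest-neighbour lookup standing in for the trained neural networks, and the Multistart optimisation is reduced to scoring random restarts on the surrogate; `plant()` is a toy power/displacement surface, not a hydrodynamic model.

```python
import random

def plant(hs, te, B, K):
    """One simulated horizon: returns (mean power, max displacement). Toy."""
    power = hs * te - 0.1 * (B - te) ** 2 - 0.1 * (K - hs) ** 2
    disp = hs * (1.0 + K / 10.0)
    return power, disp

data = []  # lifetime data set of (hs, te, B, K, power, disp) records

def surrogate(hs, te, B, K):
    """Predict (power, disp) from past data; placeholder for the ANNs."""
    rec = min(data, key=lambda r: (r[2] - B) ** 2 + (r[3] - K) ** 2)
    return rec[4], rec[5]

def choose_coefficients(hs, te, x_max, starts=50):
    """Pick (B, K) maximising predicted power subject to the motion limit."""
    best, best_p = None, float("-inf")
    for _ in range(starts):
        B, K = random.uniform(0, 10), random.uniform(0, 10)
        p, d = surrogate(hs, te, B, K)
        if d <= x_max and p > best_p:
            best, best_p = (B, K), p
    return best

random.seed(1)
hs, te, x_max = 2.0, 5.0, 3.0
for _ in range(40):                      # initial random exploration phase
    B, K = random.uniform(0, 10), random.uniform(0, 10)
    p, d = plant(hs, te, B, K)
    data.append((hs, te, B, K, p, d))
print(choose_coefficients(hs, te, x_max))
```

The key design point the abstract makes survives the simplification: once the surrogate is trained on lifetime data, the controller optimises predicted power under a displacement constraint without ever consulting a model of the device dynamics.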

Control of a Realistic Wave Energy Converter Model Using Least-Squares Policy Iteration

Abstract: An algorithm has been developed for the resistive control of a nonlinear model of a wave energy converter using least-squares policy iteration, which incorporates function approximation, with tabular and radial basis functions being used as features. With this method, the controller learns the optimal power take-off damping coefficient in each sea state for the maximization of the mean generated power. The performance of the algorithm is assessed against two online reinforcement learning schemes: Q-learning and SARSA. In both regular and irregular waves, least-squares policy iteration outperforms the other strategies, especially when starting from unfavorable conditions for learning. Similar performance is observed for both basis functions, with a smaller number of radial basis functions underfitting the Q-function. The shorter learning time is fundamental for a practical application on a real wave energy converter. Furthermore, this paper shows that least-squares policy iteration is able to maximize the energy absorption of a wave energy converter despite strongly nonlinear effects due to its model-free nature, which removes the influence of modeling errors. Additionally, the floater geometry has been changed during a simulation to show that reinforcement learning control is able to adapt to variations in the system dynamics.

Published in: IEEE Transactions on Sustainable Energy (Volume 8, Issue 4, Oct. 2017)

http://ieeexplore.ieee.org/document/7911321/
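Least-squares policy iteration with tabular (one-hot) features, as named in the abstract, can be shown on a toy problem: two sea states, three damping levels, invented mean-power rewards, and each sea state transitioning to itself. Everything numeric below is an assumption for illustration; only the LSTD-Q/policy-iteration structure is the technique itself.

```python
N_S, N_A, GAMMA = 2, 3, 0.9
REWARD = {(0, 0): 90, (0, 1): 100, (0, 2): 60,
          (1, 0): 60, (1, 1): 90, (1, 2): 100}   # assumed mean powers
idx = lambda s, a: s * N_A + a                    # one-hot feature index

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting: returns x with Ax=b."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

# One sample per (state, action); the next state equals the current sea state.
samples = [(s, a, REWARD[(s, a)], s) for s in range(N_S) for a in range(N_A)]

policy = [0] * N_S
for _ in range(20):                       # policy iteration loop
    n = N_S * N_A
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for s, a, r, s2 in samples:           # build the LSTD-Q system A w = b
        i, j = idx(s, a), idx(s2, policy[s2])
        A[i][i] += 1.0                    # phi(s,a) phi(s,a)^T term
        A[i][j] -= GAMMA                  # -gamma phi(s,a) phi(s',pi(s'))^T term
        b[i] += r
    w = solve(A, b)                       # least-squares Q-function weights
    greedy = [max(range(N_A), key=lambda a: w[idx(s, a)]) for s in range(N_S)]
    if greedy == policy:                  # policy stable: converged
        break
    policy = greedy

print("learned policy:", policy)
```

Unlike the online Q-learning and SARSA baselines mentioned in the abstract, each policy evaluation here solves a batch least-squares problem over the whole sample set at once, which is what gives LSPI its shorter learning time.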