2008
9. Alexander Hans, Daniel Schneegaß, Anton Maximilian Schäfer, Steffen Udluft: Safe exploration for reinforcement learning. ESANN 2008: 143-148
8. Daniel Schneegaß, Steffen Udluft, Thomas Martinetz: Uncertainty propagation for quality assurance in Reinforcement Learning. IJCNN 2008: 2588-2595
2007
7. Daniel Schneegaß, Steffen Udluft, Thomas Martinetz: Neural Rewards Regression for near-optimal policy identification in Markovian and partial observable environments. ESANN 2007: 301-306
6. Anton Maximilian Schäfer, Steffen Udluft, Hans-Georg Zimmermann: The Recurrent Control Neural Network. ESANN 2007: 319-324
5. Daniel Schneegaß, Steffen Udluft, Thomas Martinetz: Explicit Kernel Rewards Regression for data-efficient near-optimal policy identification. ESANN 2007: 337-342
4. Daniel Schneegaß, Steffen Udluft, Thomas Martinetz: Improving Optimality of Neural Rewards Regression for Data-Efficient Batch Near-Optimal Policy Identification. ICANN (1) 2007: 109-118
3. Anton Maximilian Schäfer, Daniel Schneegaß, Volkmar Sterzing, Steffen Udluft: A Neural Reinforcement Learning Approach to Gas Turbine Control. IJCNN 2007: 1691-1696
2006
2. Daniel Schneegaß, Steffen Udluft, Thomas Martinetz: Kernel Rewards Regression: An Information Efficient Batch Policy Iteration Approach. Artificial Intelligence and Applications 2006: 428-433
1. Anton Maximilian Schäfer, Steffen Udluft, Hans-Georg Zimmermann: Learning Long Term Dependencies with Recurrent Neural Networks. ICANN (1) 2006: 71-80