| 2007 |
| 9 | Laëtitia Matignon, Guillaume J. Laurent, Nadine Le Fort-Piat: Hysteretic Q-learning: an algorithm for decentralized reinforcement learning in cooperative multi-agent teams. IROS 2007: 64-69 |
| 2006 |
| 8 | Laëtitia Matignon, Guillaume J. Laurent, Nadine Le Fort-Piat: Reward Function and Initial Values: Better Choices for Accelerated Goal-Directed Reinforcement Learning. ICANN (1) 2006: 840-849 |
| 7 | Laëtitia Matignon, Guillaume J. Laurent, Nadine Le Fort-Piat: Improving Reinforcement Learning Speed for Robot Control. IROS 2006: 3172-3177 |
| 2005 |
| 6 | Cédric Adda, Guillaume J. Laurent, Nadine Le Fort-Piat: Learning to control a real micropositioning system in the STM-Q framework. ICRA 2005: 4569-4574 |
| 2004 |
| 5 | Lounis Adouane, Nadine Le Fort-Piat: Hybrid Behavioral Control Architecture for the Cooperation of Minimalist Mobile Robots. ICRA 2004: 3735-3740 |
| 2000 |
| 4 | Alain Lambert, Nadine Le Fort-Piat: Safe Task Planning Integrating Uncertainties and Local Maps Federations. International Journal of Robotics Research 19(6): 597-611 (2000) |
| 1999 |
| 3 | Alain Lambert, Nadine Le Fort-Piat: Safe Actions and Observations Planning for Mobile Robots. ICRA 1999: 1341-1346 |
| 1997 |
| 2 | Michèle Rombaut, Nadine Le Fort-Piat: Driving Activity: How to Improve Knowledge of the Environment. Journal of Intelligent and Robotic Systems 18(4): 399-408 (1997) |
| 1 | Nadine Le Fort-Piat, I. Collin, Dominique Meizel: Planning robust displacement missions by means of robot-tasks and local maps. Robotics and Autonomous Systems 20(1): 99-114 (1997) |