Technical Lead: F.L. Lewis, Ph.D.
SEE RECENT PRESENTATIONS BELOW
Cooperative Control of Distributed Systems on Graphs
Cooperative Control of Renewable Energy Microgrids
Reinforcement Learning & Approximate Dynamic Programming (a minimal sketch follows this list)
Intelligent Nonlinear Control
Optimal Control for Nonlinear Systems
Discrete Event Supervisory Control
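As a minimal illustration of the reinforcement-learning and optimal-control areas above, the sketch below runs policy iteration on a linear-quadratic regulator problem (Kleinman's algorithm): repeated policy evaluation via a Lyapunov equation followed by policy improvement. The plant matrices, costs, and gains are illustrative placeholders, not taken from any publication listed on this page.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative plant, chosen open-loop stable so K = 0 is a stabilizing start.
A = np.array([[0.0, 1.0],
              [-1.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state cost
R = np.array([[1.0]])  # control cost

K = np.zeros((1, 2))   # initial stabilizing gain
for _ in range(20):
    Ac = A - B @ K     # closed-loop matrix under the current policy
    # Policy evaluation: solve Ac' P + P Ac = -(Q + K' R K)
    P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
    # Policy improvement: K <- R^{-1} B' P
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-10:
        break
    K = K_new

print("Converged gain K:", K)
```

Started from a stabilizing gain, this iteration converges to the solution of the algebraic Riccati equation; the integral-reinforcement-learning work referenced in the presentations below builds on the same evaluate/improve structure, but learns the value online from trajectory data instead of from the model.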
F.L. Lewis, D. Vrabie, and V. Syrmos, Optimal Control, third edition, John Wiley and Sons, New York, 2012.
F.L. Lewis, L. Xie, and D. Popa, Optimal & Robust Estimation: With an Introduction to Stochastic Control Theory, second edition, CRC Press, Boca Raton, 2007.
B.L. Stevens, F.L. Lewis, and E.N. Johnson, Aircraft Control and Simulation: Dynamics, Control, and Autonomous Systems, third edition, John Wiley and Sons, New York, 2015 (first edition Feb. 1992).
D. Vrabie, K. Vamvoudakis, and F.L. Lewis, Optimal Adaptive Control and Differential Games by Reinforcement Learning Principles, IET Press, 2012.
F.L. Lewis and D. Liu, editors, Reinforcement Learning and Approximate Dynamic Programming for Feedback Control, John Wiley/IEEE Press, Computational Intelligence Series, 2012.
F.L. Lewis, H. Zhang, K. Hengster-Movric, A. Das, Cooperative Control of Multi-Agent Systems: Optimal and Adaptive Design Approaches, Springer-Verlag, 2014.
F.L. Lewis, S. Jagannathan, and A. Yesildirek, Neural Network Control of Robot Manipulators and Nonlinear Systems, Taylor and Francis, London, 1999.
F.L. Lewis, Applied Optimal Control and Estimation: Digital Design and Implementation, Prentice-Hall, New Jersey, TI Series, Feb. 1992.
F.L. Lewis, D.M. Dawson, and C.T. Abdallah, Robot Manipulator Control: Theory and Practice, second edition, revised and expanded, CRC Press, Boca Raton, 2006.
Y. Kim and F.L. Lewis, High-Level Feedback Control with Neural Networks, World Scientific, Singapore, 1998.
G. Vachtsevanos, F.L. Lewis, M. Roemer, A. Hess, B. Wu, Intelligent Fault Diagnosis and Prognosis for Engineering Systems, John Wiley, New York, 2006.
PRESENTATIONS IN MAIN RESEARCH AREAS
OLDER INVITED PRESENTATIONS
Various invited talks, 2015, “Integral Reinforcement Learning for Real-time Optimal Control and Differential Multi-player Games”
Keynote Speaker, Int. Symposium on Resilient Control Systems, Philadelphia, August 2015, “Reinforcement Learning for Resilient Control in Cooperative and Adversarial Multi-agent Networks: CPS Applications in Microgrid and Human-Robot Interactions”
Invited Talk, Carnegie Mellon Pacific Campus, NASA Ames, CA, April 2015, “Cooperative Control for Renewable Energy Microgrids”
Opening Invited Speaker, Workshop on Robotics and Biotechnology, City University of Hong Kong, 16 Jan. 2015, “Reinforcement Learning for Human-Robot Interaction”
Invited Talk, Northeastern University, Shenyang, China, Jan. 2015, “Data-driven Optimization and Supervisory Control for Industrial Processes”
Data-driven Control and Optimization for Industrial Processes, Workshop at Northeastern University, Shenyang, China, May 2014. Qian Ren and Project 111 Program.
Reinforcement Learning and ADP for Real-Time Optimal Control and Dynamic Games, Plenary Talk, Int. Joint Conference on Neural Networks, Dallas, August 2013.
Data-driven Control and Optimization for Industrial Processes: Reinforcement Learning & Supervisory Control, Workshop at Northeastern University, Shenyang, China, July 2013. Project 111 Program.
Optimal Distributed Cooperative Control of Multi-Agent Systems and Graphical Games, Plenary Talk, Int. Conf. Intelligent Control and Information Processing ICICIP, Beijing, June 2013.
Distributed Cooperative Control for Electric Power Microgrid Applications, Plenary Talk, IEEE CYBER, Nanjing, May 2013.
Reinforcement Learning Adaptive Structures for Real-Time Optimal Control and Graphical Games, Invited Talk, Chinese University of Hong Kong, May 2013.
Adaptive Tuning for Optimal Process Control and Multi-Process Games Using Reinforcement Learning, Singapore Institute of Manufacturing Technology SIMTech, May 2013.
Optimal Adaptive Control Using Reinforcement Learning, Opening Plenary Talk, IEEE Multi-Conference on Systems and Controls, Dubrovnik, Croatia, Oct. 2012.
Novel Adaptive Control Structures by Reinforcement Learning, Opening Plenary Talk, Int. Conf. on System Theory, Control and Computing, Sinaia, Romania, Oct. 2012.
Reinforcement Methods for Online Learning in Autonomous Robotic Systems, Plenary Talk, FIRA Robo World Congress, Bristol, UK, 20 August 2012.
Cooperative Control: Stability versus Global Optimality, Chinese Academy of Sciences, 2012 (a minimal consensus sketch appears at the end of this presentation list).
Cooperative Control: Optimal Design and Graphical Games, Chinese Academy of Sciences, 2012.
Cooperative Control: Optimal Design, Observers, and Distributed Adaptive Control, Chinese Academy of Sciences, 2011.
Workshop, IEEE Conf. on Decision and Control (CDC), Orlando, 2011, “Optimal Adaptive Control: Online Solutions for Optimal Feedback Control and Differential Games Using Reinforcement Learning”
Lewis notes: MDP and reinforcement learning
Lewis notes: online synchronous policy iteration
Optimal Control and Online Game Solutions Using Approximate Dynamic Programming, Workshop, Symp. ADP/RL, Paris, April 2011.
“Online Optimal Adaptive Control: Real-Time learning of optimal control and zero-sum game solutions,” Plenary Talk, Chinese Conf. Decision & Control, Xuzhou, May 2010.
"Distributed Adaptive Control for Synchronization of Unknown Nonlinear Networked Systems,” Invited Talk, 9th Symposium on Frontier Problems in System and Control, Chinese Academy of Sciences, Beijing, May 2010
Structural Health Monitoring for Aircraft Skin Systems, A-Star Data Storage Institute DSI, Singapore, August 2009.
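To ground the cooperative-control entries above (see the Chinese Academy of Sciences talks), here is a minimal consensus sketch: agents on an undirected communication graph run the standard local protocol x_i' = -Σ_j a_ij (x_i - x_j), i.e. x' = -L x with L the graph Laplacian. The graph, step size, and initial states are illustrative placeholders, not drawn from any talk or publication listed here.

```python
import numpy as np

# Undirected communication graph on 4 agents (a_ij = 1 means agents i and j
# exchange states). This is a 4-cycle, chosen only for illustration.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # graph Laplacian

x = np.array([4.0, -1.0, 2.5, 0.0])  # initial scalar agent states
dt, steps = 0.01, 2000
for _ in range(steps):
    # Each agent moves toward its neighbors: x' = -L x in stacked form.
    x = x + dt * (-L @ x)

print(x)  # all entries approach the average of the initial states (1.375)
```

For a connected undirected graph this drives every state to the average of the initial conditions; the distributed point is that each agent only ever uses its neighbors' states, never the global state vector.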
INFORMATION FOR NEW STUDENTS APPLYING: Apply through the Graduate Adviser, Dept. of Electrical Engineering
Recent Former Ph.D. Students:
Research Supported by (Past and Present):
National Science Foundation
Office of Naval Research
Army Research Office, Army National Automotive Center, TARDEC/RDECOM
Air Force Office of Scientific Research
ONR, NASA, and ARO SBIR Contracts
F.L. Lewis Professional Details:
Grants and Contracts
A map to UTARI is on the UTARI website under ‘Contact Us’.