Advanced Controls and Sensors Group

Based at the UTA Research Institute, the Advanced Controls and Sensors Group conducts research in nonlinear feedback control systems, intelligent control, reinforcement learning for optimal control, synchronization of multi-agent networked systems, decision-making for intelligent driverless cars, distributed control over communication networks, neuropsychology for feedback control, robotics, machine learning in automatic feedback systems, small autonomous rotorcraft vehicles, and neural networks. The group is supported by grants from the National Science Foundation, the Office of Naval Research, and the Army Research Office, as well as by industry contracts.

Recent research areas include:

  • Cooperative control of distributed systems on communication graphs
  • Synchronized control for interactive autonomous aerial systems
  • Reinforcement learning & approximate dynamic programming
  • Human-robot interaction and robotics
  • Nonlinear control systems
  • Robust and adaptive systems and control
  • Neural network control of robots and nonlinear systems
  • Machine learning for feedback control
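As a small illustration of the first area above, cooperative control of distributed systems on a communication graph is often introduced through consensus dynamics, where each agent repeatedly moves toward the states of its graph neighbors. The sketch below is illustrative only; the ring topology, step size, and initial states are assumptions for the example, not the group's specific algorithms.

```python
import numpy as np

# Minimal consensus sketch: agents update x <- x - eps * L @ x,
# where L is the Laplacian of the communication graph.

# Adjacency matrix of an undirected 4-agent ring graph (assumed topology)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # graph Laplacian

eps = 0.2                    # step size; must be below 1 / max degree
x = np.array([1.0, 3.0, 5.0, 7.0])   # initial agent states (assumed)

for _ in range(200):
    x = x - eps * L @ x      # each agent averages toward its neighbors

print(x)  # all states converge near the initial average, 4.0
```

For a connected undirected graph, this update drives every agent to the average of the initial states, which is why the Laplacian spectrum governs both convergence and the admissible step size.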

Personnel

Frank L. Lewis, Ph.D., Fellow, National Academy of Inventors, Lab Director (lewis@uta.edu)
Yan Wan, Ph.D. (yan.wan@uta.edu)

Students

  • Yusuf Kartal (Ph.D.), cooperative control of autonomous multi-agent air vehicles
  • Patrik Kolaric (Ph.D.), decision and control for multi-agent autonomous systems
  • Bosen Lian (Ph.D.), interactive multi-agent sensor networks and autonomous systems

Alumni

  • Victor Lopez, reinforcement learning and cooperative control
  • Baker AlQaudi, bio-inspired adaptive tuning of human-robot interfaces
  • Shan (Susan) Zuo (co-advised with A. Davoudi), cooperative control of multi-agent systems
  • Bahare Kiumarsi, reinforcement learning and biologically inspired control
  • Hamidreza Modares, reinforcement learning for feedback control
  • Vahidreza Nasirian, revisiting established control paradigms in emerging energy hubs (co-advised; main adviser: Dr. Ali Davoudi, EE, UTA)
  • Ali Bidram, cooperative control for electric power microgrids (co-advised; main adviser: Dr. Ali Davoudi, EE, UTA)
  • Kristian Hengster-Movric, cooperative control systems, distributed optimal design on graphs
  • Muhammad Aurangzeb, coalitions and games on graphs
  • Mohammed Abouheaf, cooperative graphical games, reinforcement learning for power system economic dispatch
  • Kyriakos Vamvoudakis, neural networks for feedback control
  • Emanuel Stingu, autonomous aerial vehicles, helicopters
  • Draguna Vrabie, neural networks for control, approximate dynamic programming for continuous-time systems
  • Abhijit Das, nonlinear autopilots for UAV helicopters
  • Prasanna Ballal, wireless sensor networks
  • Pritpal Dang, man-machine interfaces
  • J. Gadewadikar, H-infinity control, output feedback control, helicopter UAV control
  • Asma Al-Tamimi, approximate dynamic programming for discrete-time systems
  • Cheng Tao, finite horizon optimal control for nonlinear systems
  • M. Abu-Khalaf, nonlinear control and HJB equation design

Recent Funding

  • F.L. Lewis, Yan Wan, and Ali Davoudi, “Graphical Games and Distributed Reinforcement Learning Control in Human-networked Multi-group Societies,” ARO grant, $750,000, Sept. 2020-Sept. 2023.
  • F.L. Lewis, Yan Wan, and Ali Davoudi, “EAGER: Real-Time: Collaborative Research: Unified Theory of Model-based and Data-driven Real-time Optimization and Control for Uncertain Networked Systems,” NSF grant, $220,000, September 2018-August 2020.
  • F.L. Lewis and Yan Wan, “Optimal Design for Assured Performance of Interactive Multibody Systems,” ONR grant, $815,000, June 2018-May 2022.

Industry Contracts

  • F.L. Lewis and Yan Wan, “Fast Autonomous Driving Decision based on Learning and Rule-based Cognitive Information,” three-year Ford contract, $150,000, April 2019-April 2022.
  • F.L. Lewis and Yan Wan, “Heterogeneous Autonomous Networks for Sensor Optimizing Locomotion,” $50,000 contract from Lockheed Martin Advanced Technology Labs, Feb.-Dec. 2019. 

Patents

  • 7,548,011, B. Borovic, F.L. Lewis, A.Q. Liu, and D. Popa, "Systems and Methods for Improved Control of Micro-Electrical-Mechanical System (MEMS) Electrostatic Actuator," June 16, 2009.
  • 7,080,055, J. Campos and F.L. Lewis, "Method for Backlash Compensation Using Discrete-Time Neural Networks," July 2006.
  • 6,611,823, R. Selmic, F.L. Lewis, A.J. Calise, and M.B. McFarland, "Backlash Compensation Using Neural Network," August 26, 2003.
  • 6,185,469, F.L. Lewis, D. Tacconi, A. Gurel, and O. Pastravanu, "Method and Apparatus for Testing and Controlling a Flexible Manufacturing System" (industrial process resource assignment).
  • 6,064,997, S. Jagannathan and F.L. Lewis, "Discrete-Time Tuning of Neural Network Controllers for Nonlinear Dynamical Systems," May 16, 2000.
  • 5,943,660, A. Yesildirek and F.L. Lewis, "Method for Feedback Linearization of Neural Networks and Neural Network Incorporating Same," August 24, 1999.

Main Books

  • F.L. Lewis, Hongwei Zhang, K. Hengster-Movric, A. Das, Cooperative Control of Multi-Agent Systems: Optimal and Adaptive Design Approaches, Springer-Verlag, Berlin, 2014.
  • D. Vrabie, K. Vamvoudakis, and F.L. Lewis, Optimal Adaptive Control and Differential Games by Reinforcement Learning Principles, IET Press, 2012.
  • F.L. Lewis, S. Jagannathan, and A. Yesildirek, Neural Network Control of Robot Manipulators and Nonlinear Systems, Taylor and Francis, London, 1999.
  • F.L. Lewis, D.M. Dawson, and C.T. Abdallah, Robot Manipulator Control: Theory and Practice, 2nd edition, Revised and Expanded, CRC Press, Boca Raton, 2006.
  • F.L. Lewis, Applied Optimal Control and Estimation: Digital Design and Implementation, Prentice-Hall, New Jersey, TI Series, Feb. 1992.
  • B.L. Stevens and F.L. Lewis, Aircraft Control and Simulation, John Wiley and Sons, New York, Feb. 1992. Second edition, 2003.
  • F.L. Lewis, D. Vrabie, and V. Syrmos, Optimal Control, third edition, John Wiley and Sons, New York, 2012.

Recent Journal Publications

  • Xue, Wenqian, Fan, J., Lopez, V. G., Li, J., Jiang, Y., Chai, T., Lewis, F. L. (2020). New Methods for Optimal Operational Control of Industrial Processes using Reinforcement Learning on Two Time-Scales. IEEE Transactions on Industrial Informatics, 16(5), 3085--3099.
  • Liu, H., Ma, T., Lewis, F. L., Wan, Y. (2020). Robust Formation Control for Multiple Quadrotors With Nonlinearities and Disturbances. IEEE Transactions on Cybernetics, 50(4), 1362--1371.
  • Modares, H., Kiumarsi, B., Lewis, F. L., Ferrese, F., Davoudi, A. (2020). Resilient and Robust Synchronization of Multi-agent Systems Under Attacks on Sensors and Actuators. IEEE Transactions on Cybernetics, 50(3), 1240--1250.
  • Valadbeigi, A. P., Sedigh, A. K., Lewis, F. L. (2020). H-infinity Static Output-Feedback Control Design for Discrete-Time Systems Using Reinforcement Learning. IEEE Transactions on Neural Networks and Learning Systems, 31(2), 396--406.
  • Cremer, S., Das, S. K., Wijayasinghe, I. B., Popa, D. O., Lewis, F. L. (2020). Model-Free Online Neuroadaptive Controller With Intent Estimation for Physical Human–Robot Interaction. IEEE Transactions on Robotics, 36(1), 240--253.
  • Kartal, Y., Kolaric, P., Lopez, V. G., Dogan, A., Lewis, F. L. (2020). Backstepping approach for design of PID controller with guaranteed performance for micro-air UAV. Control Theory and Technology, 18(1), 4--18.
  • Liu, D., Liu, H., Lewis, F. L., Wan, Y. (2020). Robust Fault-Tolerant Formation Control for Tail-Sitters in Aggressive Flight Mode Transitions. IEEE Transactions on Industrial Informatics, 16(1), 299--308.
  • Ye, M., Hu, G., Lewis, F. L., Xie, L. (2019). A Unified Strategy for Solution Seeking in Graphical N-coalition Noncooperative Games. IEEE Transactions on Automatic Control, 64(11), 4645--4652.
  • Chen, C., Modares, H., Xie, K., Lewis, F. L., Wan, Y., Xie, S. (2019). Reinforcement Learning-based Adaptive Optimal Exponential Tracking Control of Linear Systems with Unknown Dynamics. IEEE Transactions on Automatic Control, 64(11), 4423--4438.
  • Kiumarsi, B., AlQaudi, B., Modares, H., Lewis, F. L., Levine, D. S. (2019). Optimal Control Using Adaptive Resonance Theory and Q-Learning. Neurocomputing, 361, 119--125.
  • Chen, C., Xie, K., Lewis, F. L., Xie, S., Davoudi, A. (2019). Fully Distributed Resilience for Adaptive Exponential Synchronization of Heterogenous Multi-Agent Systems Against Actuator Faults. IEEE Transactions on Automatic Control, 64(8), 3347--3354.
  • Lopez, V. G., Lewis, F. L. (2019). Dynamic Multiobjective Control for Continuous-time Systems using Reinforcement Learning. IEEE Transactions on Automatic Control, 64(7), 2869--2874.
  • Zuo, S., Song, Y. D., Lewis, F. L., Davoudi, A. (2019). Time-Varying Output Formation-Containment of General Linear Homogeneous and Heterogeneous Multi-Agent Systems. IEEE Transactions on Control of Network Systems, 6(2), 537--548.
  • Xie, K., Chen, C., Lewis, F. L., Xie, S. (2019). Adaptive Compensation for Nonlinear Time-varying Multi-Agent Systems with Actuator Failures and Unknown Control Directions. IEEE Transactions on Cybernetics, 49(5), 1780--1790.
  • Li, J., Chai, T., Lewis, F. L., Ding, J., Jiang, Y. (2019). Off-policy interleaved Q-learning: optimal control for affine nonlinear discrete-time systems. IEEE Transactions on Neural Networks and Learning Systems, 30(5), 1308--1320.