2013 — 2016
Zhang, Xiaoli
Collaborative Research: 3D Gaze Control For Assistive Robots
PIs: Zhang, Xiaoli and Nelson, Carl A. Proposal Numbers: 1264496 & 1264504
The objective of this research is to develop a novel 3D gaze control system to intuitively connect elderly or disabled people with robotic assistants. Communication will take place through 3D gaze, which serves as the robot control signal. We will focus on basic Activities of Daily Living that enable the individual to live independently, including retrieving an object for the user and moving an object from one place to another. Tasks include: (1) a study of 3D gaze estimation based on eye modeling and simulation with experimental data; (2) investigation and development of a 3D gaze control framework to enable target selection relative to real-world, everyday objects on which assistive tasks are performed; (3) establishment of a gaze control platform for testing and evaluating the proposed 3D gaze control model with a wide range of assistive robot applications.
Intellectual Merit: The project investigates a novel 3D gaze control concept for robotic assistants, with the goal of increasing the level of independence for people with motor impairments due to disease or senescence. The research also proposes a novel gaze control model that uses gaze-extracted features to guide a robot in object identification and operation, achieving simple and natural human-robot interaction. Finally, the project seeks to develop a novel 3D gaze estimation system to ensure accuracy and reliability, an area that is currently not well explored. This project seeks to provide solutions for a wide spectrum of robotic assistive applications.
Broader Impacts: By introducing 3D gaze as the control signal in communication between elderly or disabled people and robotic assistants, this project aims to build a simple and natural control interface for motor-impaired people and increase their level of independent living. Persons with disabilities, especially students with disabilities at Wilkes University, will be actively involved in the development of the proposed work, providing feedback through interviews and surveys and participating in testing and evaluating the working system. Results, outcomes, software tools, benchmarks, and educational materials will be disseminated through a project web site as well as through journal and conference publications. A new course on assistive robotic technology is being developed and taught at Wilkes University.
2017 — 2022
Zhang, Xiaoli
CAREER: Goal-Guided Self-Reflective Control Interface in Teleoperation @ Colorado School of Mines
The focus of this project is on human-robot interaction (HRI) during teleoperation, in both the fundamental research and the educational activities. The research derives from the observation that object grasping and manipulation using conventional teleoperation approaches places a significant control burden on the human operator and reduces task performance. This is because indirect manipulation of an object through a robot hand may cause inaccurate or undesired robot motion, while the limited control inputs available to the operator (e.g., joystick, data glove) make direct kinematic mapping challenging for complex object manipulation tasks. As a result, the human operator must mentally and physically transform (e.g., rotate, translate, scale, deform) the desired robot actions into the required inputs at the interface; these transformations significantly increase control difficulty. The primary research goal of this project is to develop a novel goal-guided self-reflective control interface (GSRCI), which will enable the robot to understand the operator's high-level objective during an object-grasping operation and to conform to task constraints in order to reduce control difficulties and ensure the success of subsequent manipulation. The primary educational activity derives from a common deficiency of distance learning programs supported by existing teleconferencing technologies, namely that they offer limited or no opportunities for hands-on learning. To address this problem, an interactive distance learning system (IDLS) will be developed that immerses remote students in the classroom environment through student tele-controlling of a robot's arms and hands for object manipulation and/or interaction with other classmates, thereby enabling remote users to feel present in the classroom and engaged in class activities.
The task modeling technology as well as the goal-guided control interface technology from the research work will be coupled to develop an easy and intuitive teleoperation-based distance learning system for K-12 students. The GSRCI represents transformative technology with the potential to provide a new paradigm of HRI that will significantly improve the power and quality of teleoperation and broadly impact applications in diverse domains, including assistance for the elderly and disabled, minimally invasive surgery, space and underwater exploration, military reconnaissance, nuclear servicing, and urban search and rescue. The IDLS will foster engagement for remote students and allow them to successfully participate in STEM activities, offering disadvantaged groups the potential to learn regardless of their ability to physically attend a class setting.
To achieve these goals, a cognitive interface will be created that enables a robot to flexibly reproduce actions that accommodate the operator's motion inputs, as well as to autonomously regulate these actions in a self-reflective manner to compensate for task constraints, thereby facilitating subsequent manipulations. A novel goal-achievement indicator will predict the level of goal accomplishment for the planned action. Additionally, a goal-guided remedial planner will regulate this action to accomplish the goal by relaxing the constraint bound on the operator's motion inputs using an adaptive local search strategy. New models for human and robot task-based grasp behaviors will also be developed, providing a knowledge base to infer human goals in a task and to conduct goal-guided robot-grasp planning. To link quantified task constraints with symbolic tasks, account for uncertainty in task modeling, and allow task reasoning from partially observed data, a directed probabilistic Bayesian model will encode the statistical dependence among the task goal, object attributes, actions, and task constraints.
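The kind of directed probabilistic model described above can be sketched in miniature. The following is an illustrative sketch only, not the project's actual model: the variables, state names, and probabilities are all hypothetical assumptions chosen to show how observing a grasp action (even with the task constraint unobserved) yields a posterior over the task goal by enumeration.

```python
# Illustrative sketch of a small directed probabilistic model linking a
# task goal to actions and task constraints. All variable names, states,
# and probabilities below are hypothetical, not from the proposal.

P_goal = {"pour": 0.5, "handover": 0.5}          # prior over task goals

# P(action | goal): which grasp action each goal tends to produce
P_action = {
    "pour":     {"rim_grasp": 0.2, "side_grasp": 0.8},
    "handover": {"rim_grasp": 0.7, "side_grasp": 0.3},
}

# P(constraint | goal): task constraint implied by each goal
P_constraint = {
    "pour":     {"keep_upright": 0.9, "none": 0.1},
    "handover": {"keep_upright": 0.3, "none": 0.7},
}

def infer_goal(action=None, constraint=None):
    """Posterior over the task goal given partially observed evidence,
    computed by direct enumeration over this small network."""
    scores = {}
    for goal, prior in P_goal.items():
        score = prior
        if action is not None:        # fold in observed action
            score *= P_action[goal][action]
        if constraint is not None:    # fold in observed constraint
            score *= P_constraint[goal][constraint]
        scores[goal] = score
    total = sum(scores.values())
    return {goal: s / total for goal, s in scores.items()}

# Reasoning from partially observed data: only the action is seen.
posterior = infer_goal(action="side_grasp")
```

In this toy network, observing a side grasp shifts belief toward the "pour" goal; a full model would condition on object attributes as well and learn the conditional tables from human grasp demonstrations.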
2018 — 2020
Petr, Vilem; Zhang, Xiaoli; Yu, Zhenzhen; Liu, Stephen; Madeni, Juan Carlos
Phase II IUCRC At Colorado School of Mines: Center For Manufacturing & Materials Joining Innovation Center (Ma2JIC) @ Colorado School of Mines
The Manufacturing and Materials Joining Innovation Center (Ma2JIC) addresses the questions of infrastructure rebuilding and innovative materials application in modern manufacturing, both of which are of national interest. The Ma2JIC Center establishes a collaborative environment between universities and industrial partners that promotes the development and application of fundamental knowledge in the areas of materials joining and additive manufacturing, and provides a platform for the education of the next generation of scientists and engineers. As one of the sites since its inception, the Colorado School of Mines (CSM) Site has complemented the Center with its unique expertise and laboratory facilities in the design and manufacturing of innovative welding materials, welding and joining process innovation, welding materials characterization, and additive manufacturing for the energy, aerospace, oil and gas, and manufacturing industries. The CSM Site will seek to intensify its collaborations with the other Center Sites to jointly develop collaborations with welding, joining, and manufacturing industries large and small, national laboratories, and government agencies. The strength of the Center is the group's ability to close the gap between materials development and weldability for materials used in the manufacturing industry, with ultimate societal gain from these synergies.
The Ma2JIC Center research thrusts of Process Innovation, Materials Performance, Additive Manufacturing, and Weldability Testing and Evaluation will develop fundamental understanding as described by the materials tetrahedron: "Processing-Microstructure-Properties-Performance". CSM Site researchers are examining the fundamental aspects of gas tungsten arc (GTA) and laser weldability of laser powder bed fusion AM-built products, the solidification behavior of laser powder bed fusion and directed energy deposition AM products, and filler metal development for microstructural optimization in electron beam freeform fabrication products, including metal matrix composites. The incorporation of reaction synthesis powder into tubular wires for both laser-based and electron-beam-based additive manufacturing is being attempted. The effect of surface tension on AM deposits will be fully characterized and understood for process control. Another research focus addresses the design and manufacture of thermal spray coatings with enhanced wear and fracture resistance for oil and gas drilling operations. The CSM Site will continue to study and design low transformation temperature welding (LTTW) consumables for distortion control and residual stress mitigation. These programs are expected to lead to the development of fundamental knowledge as well as practical technology. The CSM Site is also pursuing cross-university and multi-industry collaboration on intelligent welding, friction stir processing, robotics, advanced neutron and X-ray characterization, and data-driven modeling.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2019 — 2020
Seetharaman, Sridhar; Spiegel, Samuel; Zhang, Xiaoli; Duzgun, Sebnem; Brice, Craig
Convergence Accelerator Phase I (RAISE): AI-Enabled Personalized Training For Displaced Workers in Materials Supply Chain @ Colorado School of Mines
The NSF Convergence Accelerator supports team-based, multidisciplinary efforts that address challenges of national importance and show potential for deliverables in the near future.
The broader impact of this Convergence Accelerator Phase I project is to create artificial intelligence (AI)-enabled tools aimed at curbing the looming workforce displacement crisis brought on by the fourth industrial revolution. The developed AI technology will allow personalized training techniques to be ported to emerging jobs. Specifically, personalized training and assessment generation can be used across workers with various backgrounds and in various areas, focusing only on what workers need to learn and skipping what they have already mastered. This can cut training time dramatically while boosting knowledge and skill acquisition, building workers' self-awareness and self-confidence, and ensuring the fair treatment of workers. While the immediate test beds are in mining, metal processing, and manufacturing, which take advantage of existing core education and research strengths and our industrial consortia and stakeholder industries, the tools can be tailored for other fields of engineering. Through multidisciplinary cooperation across academia, industry, and the education field, this project will enhance the scientific understanding of future worker training and form a pedagogical convergence of AI techniques, educational sciences, and traditional engineering and sciences that meets future interdisciplinary job requirements.
This Convergence Accelerator Phase I project will replicate personalized tutoring methods with an AI-enabled approach that allows for automatic assessment of the skills and gaps of displaced workers and for fast, fair, and cost-effective training at scale to place them in new jobs. We will focus on three research objectives that are mutually dependent for achieving our research goal. First, multidisciplinary cooperation across academia, industry, and the education field will enhance our scientific understanding of future worker training and will define a meaningful model structure, featurized inputs, and assessment metrics for the AI tool. Second, a new learning approach will modularly learn and transfer knowledge from available worker-job combinations to generate a customized training program for a given new worker-job combination that was not seen during training. Lastly, the team will develop a solid plan that integrates the AI-enabled tool with existing university and industrial training programs and paves the way for practical deployment of the AI tool in industry. The deliverables of this project will serve U.S. needs across the entire materials supply chain sector and develop a diverse, globally competitive STEM workforce.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
2021 — 2024
Zhang, Xiaoli
A Framework For Semi-Autonomous in-Hand Telemanipulation @ Colorado School of Mines
This award supports research to develop and test a control framework for stable in-hand telemanipulation, using a physics-informed machine learning approach that can adapt the telerobotic controller to the physiological limitations of individual users. In daily life, in-hand manipulation is needed to complete seemingly simple tasks such as adjusting the orientation of a power adaptor for plugging into an outlet, as well as complex skilled tasks such as manipulating a scalpel during a surgical procedure. The long-term objective of the line of research initiated with this project is to enable dexterous in-hand manipulation in teleoperation scenarios in which the robot can understand how to track real-time changes in human finger motions and use that information to actively ensure the stability of an in-hand object while it is being manipulated. This project promotes the progress of science and advances the national health, prosperity, and welfare by developing and testing novel methods implementing stable shared human and machine control of hand-held objects in teleoperation scenarios, such as tele-surgery, healthcare assistive robotics, and remote search and rescue. The project will also support outreach activities through an existing K-12 program at the Colorado School of Mines, at a local community college, and at a STEM camp for girls.
This project seeks to solve challenges associated with in-hand telemanipulation through the development and testing of a control framework that utilizes physics-informed hierarchical machine learning approaches to adapt the telerobot to the physiological limitations of individual users. There are three research objectives: 1) to use physics-informed metrics to guide the robot control policy to generate stable grasp configurations; 2) to optimize interpretation of signals derived from the human hand motion tracking system using a hierarchical learning model; and 3) to personalize shared control of the manipulation task between the human and the robot through active machine learning guided by the user's real-time corrective adjustments. The project team will use a commercially available robotic hand system with the semi-autonomous framework to conduct human-subject evaluation. Subjects will perform telemanipulation tasks of increasing complexity, ranging from the relative simplicity of a jar-opening task to a more complex case requiring in-hand changes of tool position and orientation. The work promises to advance the science and engineering of human-robot cooperation through a novel semi-autonomous control framework that can support complex in-hand object manipulation for teleoperation.
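The interplay of the first and third objectives can be sketched in toy form. This is a hypothetical illustration only, not the project's controller: the stability metric, the blending rule, and all numbers are assumptions, showing one common way a physics-informed stability score can arbitrate authority between the operator's command and an autonomous correction.

```python
# Hypothetical sketch of shared control for in-hand telemanipulation.
# The stability metric and blending rule below are illustrative
# assumptions, not the project's actual framework.

def grasp_stability(contact_margins):
    """Toy stability score in [0, 1]: the worst contact margin stands in
    for a physics-informed metric (e.g., a force-closure measure)."""
    return max(0.0, min(1.0, min(contact_margins)))

def shared_command(human_cmd, robot_correction, stability):
    """Blend per joint: trust the operator more when the grasp is
    stable; shift authority to the robot's stability-preserving
    correction as the stability score drops."""
    alpha = stability  # operator's share of authority
    return [alpha * h + (1.0 - alpha) * r
            for h, r in zip(human_cmd, robot_correction)]

# Example: a fairly stable grasp leaves the operator mostly in control.
stability = grasp_stability([0.8, 0.9, 0.85])   # three contacts
cmd = shared_command(human_cmd=[0.4, -0.2],
                     robot_correction=[0.1, 0.0],
                     stability=stability)
```

A personalized version of this idea would adapt the blending online, e.g. by treating the user's corrective adjustments as training signal for the authority allocation, in the spirit of the active-learning objective above.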
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.