1991 — 1993 |
Radwin, Robert G |
K01 Activity Code Description: For support of a scientist, committed to research, in need of both advanced research training and additional experience. |
Characterization of Posture, Force & Repetitive Motion @ University of Wisconsin Madison
Cumulative trauma disorders are caused, aggravated, and precipitated by the repetitive exertions and movements workers are routinely required to perform and the awkward postures they must assume. Currently there are no quantitative standards available for protecting workers from excessive exposure to these hazards. Before dose-response relationships can be established leading to exposure standards for preventing these disorders, practical methods are needed for measuring and characterizing worker exposure to cumulative trauma stress factors. This project will develop efficient analytical techniques for assessing the physical stress and strain associated with hand-intensive tasks involving repetitive motion and forceful exertions. A practical instrumentation system will be implemented for continuously measuring wrist posture and hand force. The use of spectral analysis will be investigated for characterizing task repetitiveness, forcefulness, and postural stress. Data will be transformed into the frequency domain to determine the frequency and magnitude of repetitive motions and forceful exertions performed during manual work. Psychophysical measures of localized fatigue and discomfort will be correlated with posture and force spectral components in order to design digital filters for frequency-weighted physical stress measurements aimed at controlling the strain associated with repetitive tasks. Finally, the capability of these methods for characterizing and reducing the stress and strain associated with repetitive motions and exertions will be tested using both industrial tasks simulated in the laboratory and actual tasks performed in the workplace.
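As a rough illustration of the frequency-domain characterization described above (not the project's actual instrumentation or algorithms), the following Python sketch estimates the dominant repetition frequency of a simulated, continuously sampled wrist-flexion signal using a standard power spectral density estimate; the sampling rate, signal, and peak-picking step are all assumptions.

```python
# Minimal sketch (assumed signal and parameters, not the project's pipeline):
# estimate the dominant repetition frequency of a sampled wrist-flexion signal.
import numpy as np
from scipy.signal import welch

fs = 100.0                               # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)             # one minute of simulated data
# Hypothetical wrist-flexion angle (degrees): 0.5 Hz repetitive motion plus noise
angle = 20 * np.sin(2 * np.pi * 0.5 * t) + 3 * np.random.randn(t.size)

# Welch power spectral density of the posture signal
freqs, psd = welch(angle, fs=fs, nperseg=1024)
dominant = freqs[np.argmax(psd[1:]) + 1]  # skip the DC bin
print(f"Dominant repetition frequency: {dominant:.2f} Hz")
```

A frequency-weighted filter of the kind the abstract proposes could then emphasize the spectral components found to correlate with localized fatigue and discomfort.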
|
1 |
1996 — 2000 |
Radwin, Robert G |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
Worker Monitoring Tests For Carpal Tunnel Syndrome @ University of Wisconsin Madison
DESCRIPTION (Adapted From the Investigator's Abstract): This proposal investigates using a battery of psychomotor and sensory tests for periodically monitoring industrial workers for subtle functional deficits associated with carpal tunnel syndrome (CTS). Although these tests are not intended for diagnosis, they should be sufficiently sensitive to detect early CTS deficits, yet more specific for CTS than other monitoring instruments, such as self-reported or structured symptom surveys. Since these tests are automated and non-invasive, they may be suitable for routinely monitoring workers for early CTS, analogous to how audiometric tests are used for monitoring workers exposed to loud acoustic noise. Monitoring instruments are needed for active surveillance, for early detection of CTS before cases progress into costly and disabling occupational illnesses, and for triggering ergonomics interventions. The Principal Investigator has already developed the biomedical instrumentation for this proposal. The tests were demonstrated to be reliable and sensitive for detecting psychomotor and sensory deficits associated with CTS. These tests were correlated with median nerve electrophysiologic parameters in a case-control study involving outpatients with electrophysiologically confirmed CTS and a control group free of symptoms and of electrodiagnostic findings consistent with CTS. This proposal advances that work with a prospective study investigating whether temporal changes in these test parameters can be observed among workers developing CTS symptoms. An industrial population of 250 workers exposed to CTS risk factors, drawn from a variety of industries with high CTS incidence rates, will be periodically tested for changes in psychomotor and sensory functional test parameters over a four-year period. Workers will annually undergo examination for CTS, including electrophysiologic testing. Outcomes with positive/negative functional changes and positive/negative self-reported symptoms will be ascertained, and sensitivity, specificity, and predictive value will be compared. In addition to these field investigations, new parameters for these instruments will be investigated using outpatient subjects and controls.
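To make the planned comparison of sensitivity, specificity, and predictive value concrete, here is a minimal sketch using a hypothetical 2x2 table; the counts are invented for illustration and are not study data.

```python
# Illustrative calculation only (hypothetical counts, not study results):
# screening metrics of a monitoring test against a CTS case definition.
def screening_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)          # proportion of cases the test detects
    specificity = tn / (tn + fp)          # proportion of non-cases it rules out
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical 2x2 table: functional-test change vs. incident CTS
sens, spec, ppv, npv = screening_metrics(tp=18, fp=22, fn=6, tn=204)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```

In the proposed study, such metrics would be computed both for functional-test changes and for self-reported symptoms against the annual CTS case ascertainment, and then compared.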
|
1 |
2012 — 2013 |
Radwin, Robert G |
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
Video Exposure Assessment of Hand Activity Level @ University of Wisconsin-Madison
DESCRIPTION (provided by applicant): This exploratory research investigates the feasibility of automatically evaluating the American Conference of Governmental Industrial Hygienists (ACGIH) Hand Activity Level (HAL) using digital video processing. We are developing feature extraction algorithms for quantifying HAL in real time using conventional digital videos focused on the upper extremities. This research is significant because there is currently no practical instrument for objectively, unobtrusively, and efficiently measuring repetitive motion exposure for evaluating the risk of musculoskeletal injuries in the workplace. Current methods involve either direct measurements using instruments attached to a worker's hands or arms, or indirect observations. Both instrument and observation methods are mostly limited to research studies and are highly impractical for industry practitioners. The proposed approach is innovative because it uses digital video processing to measure repetitive motion exposure and because it leverages a vast database of videos and associated exposure data already analyzed manually through collaboration with the University of California-Berkeley (UCB). UCB is making available 320 videos of workers in four industries, and their associated analyses, for this initial study. A proof-of-concept algorithm has already been tested. We first propose to further develop and refine video processing algorithms for automatically and continuously measuring HAL from standard digital video recordings focused on the upper extremities. The video algorithms will then be evaluated by comparing repetition frequency, duty cycle, and HAL derived by our algorithm from video clips of laboratory subjects performing paced repetitive tasks and of actual industrial workers against similar outcomes obtained by a human analyst conducting a manual frame-by-frame multimedia video task analysis. Ultimately this translational research might lead to a video-based direct reading exposure assessment instrument with broad applications for occupational health and safety.
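As a hedged sketch of the kind of outcome these video algorithms would report (not the feature-extraction algorithm under development), the example below derives repetition frequency and duty cycle from a hypothetical hand-speed trace by thresholding it into exertion and rest periods; the frame rate, signal, and threshold are assumptions.

```python
# Minimal sketch (assumed signal and threshold, not the grant's algorithm):
# derive repetition frequency and duty cycle from a tracked hand-speed trace.
import numpy as np

fps = 30.0                                # assumed video frame rate
t = np.arange(0, 30, 1 / fps)
speed = np.abs(np.sin(2 * np.pi * 0.8 * t)) + 0.1 * np.random.rand(t.size)

active = speed > 0.5                      # hypothetical activity threshold
duty_cycle = 100.0 * active.mean()        # percent of time spent "exerting"

# Count rising edges of the active signal as individual exertions
exertions = np.count_nonzero(np.diff(active.astype(int)) == 1)
frequency = exertions / t[-1]             # exertions per second

print(f"duty cycle = {duty_cycle:.0f}%  frequency = {frequency:.2f} Hz")
# HAL could then be estimated from frequency and duty cycle using the ACGIH
# table or a published regression; coefficients are omitted here.
```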
|
1 |
2012 — 2013 |
Radwin, Robert G |
R21 Activity Code Description: To encourage the development of new research activities in categorical program areas. (Support generally is restricted in level of support and in time.) |
Video Motion Analysis of Repetitive Exertions @ University of Wisconsin-Madison
DESCRIPTION (provided by applicant): Upper extremity musculoskeletal injuries are common in hand-intensive work involving highly repetitive motions and exertions. Previous exposure assessment methods involve either direct measurements using instruments attached to a worker's hands or arms, or indirect observations. Both methods, however, are highly impractical for occupational health practice. Compared to instruments, indirect observation lacks precision and accuracy, is not suitable for long observation periods, and requires considerable analyst time. Alternatively, attaching sensors to working hands is too time consuming, and sensors may interfere with normal working operation. We are developing novel real-time video processing algorithms to automatically and unobtrusively measure upper limb kinematics from conventional single camera digital video. In this exploratory research, we propose using a video feature extraction method to indirectly quantify hand activity using markerless video motion analysis. We are collaborating with the NIOSH Industry-wide Studies Branch, which is making available more than 700 videos of 484 workers at three study locations, along with their associated observational analyses and longitudinal health outcome data, for this initial study. The real-time video algorithms will first be developed and designed for robustness, using sequential Bayesian estimation to track the motion of a general region of interest (ROI) on the upper extremities selected by the camera operator or analyst, and statistically estimating its velocity and acceleration. Video-derived ROI kinematics from varying vantage points will then be compared against ground truth measurements obtained using a 3D infrared motion capture system and against conventional observational measurements of laboratory participants performing controlled repetitive motion tasks. We will also perform secondary analysis of videos of actual workplace activities already recorded and studied by NIOSH, for algorithm refinement and for studying the correlation between the kinematic data and conventional observational exposure measures. We will evaluate injury risk from the NIOSH health outcome data using our new exposure analysis method, and compare the resulting dose-response model, built from the frequency, duty cycle, velocity, and acceleration measured by our automatic video analysis, against conventional manual analysis in order to evaluate improvement over existing methods. This exploratory research could ultimately lead to a full-scale epidemiology and validation study of our video exposure assessment methodology that would leverage the existing NIOSH consortium database from seven participating laboratories in order to establish a model relating video-derived kinematics and health outcomes. The long-term goal is to develop a video-based direct reading exposure assessment instrument for upper limb repetitive motion activities that should be useful for evaluating the risk of injuries in the workplace and for primary prevention.
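One common form of the sequential Bayesian estimation mentioned above is a Kalman filter. The sketch below tracks one pixel coordinate of an ROI centroid with a constant-acceleration Kalman filter and reads out position, velocity, and acceleration estimates; the measurements, noise covariances, and frame rate are assumptions, and the proposal's actual tracker may differ.

```python
# Sketch of one common sequential Bayesian estimator (a constant-acceleration
# Kalman filter) applied to one coordinate of an ROI centroid measured each frame.
import numpy as np

dt = 1 / 30.0                                     # assumed frame interval
F = np.array([[1, dt, 0.5 * dt**2],
              [0, 1, dt],
              [0, 0, 1]])                         # state transition: pos, vel, acc
H = np.array([[1.0, 0.0, 0.0]])                   # only position is measured
Q = 1e-3 * np.eye(3)                              # process noise (assumed)
R = np.array([[4.0]])                             # measurement noise, pixels^2

x = np.zeros((3, 1))                              # state estimate
P = np.eye(3) * 100.0                             # state covariance

measurements = [100.0, 103.2, 107.1, 112.4]       # hypothetical centroid x-coordinates
for frame, z in enumerate(measurements):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured pixel coordinate
    y = np.array([[z]]) - H @ x                   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ y
    P = (np.eye(3) - K @ H) @ P
    pos, vel, acc = x.ravel()
    print(f"frame {frame}: pos={pos:.1f}px vel={vel:.1f}px/s acc={acc:.1f}px/s^2")
```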
|
1 |
2016 — 2018 |
Radwin, Robert G |
R01 Activity Code Description: To support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing his or her specific interest and competencies. |
A Direct Reading Video Assessment Instrument For Repetitive Motion Stress @ University of Wisconsin-Madison
Project Summary/Abstract: This research studies whether computer vision can more effectively evaluate worker exposure and assess the associated risk for work-related injuries than conventional methods. Current methods involve either observations or measurements using instruments attached to a worker's hands or arms. Observation is often considered too subjective or inaccurate, and instruments too invasive or time consuming, for routine applications in industry. Automated job analysis potentially offers a more objective, accurate, repeatable, and efficient exposure assessment tool than observational analysis. Computer vision requires fewer resources than instruments attached to workers and does not interfere with production; can quantify more exposure variables and interactions; is suitable for long-term, direct reading exposure assessment; and offers animated data visualizations synchronized with video for identifying aspects of jobs needing interventions. This research leverages coordinated multi-institutional prospective studies of upper limb work-related musculoskeletal disorders (MSDs) conducted between 2001 and 2010 that studied production and service workers from a variety of US industries and used rigorous case criteria and individual-level exposure assessments prospectively, including recording detailed videos of the work. Our study partners from the National Institute for Occupational Safety and Health, the Washington State Labor & Industries Safety & Health Assessment & Research for Prevention program, and the University of California-San Francisco will provide task-level videos, associated exposure variable data, and prospective health outcomes for 1,649 workers. Exposure properties directly measured from videos of jobs and corresponding health outcomes from the prospective study database will establish dose-response relationships to translate into a prototype automated job analysis instrument. We build on our previous success in developing markerless video hand motion algorithms for estimating the ACGIH hand activity level, and reliable video processing methods for hand tracking under challenging viewing conditions. This proposal will refine and develop additional video algorithms and analyze the videos to extract exposure measures for repetition, posture, exertions, and their interactions. The video-extracted exposure measures will be compared against conventional observational exposure measures made by our collaborators. Video and corresponding observational data will be merged with the prospective health outcomes data to evaluate dose-response relationships and to develop and validate parsimonious exposure risk models for an automated direct reading repetitive motion instrument. We will test whether automation has better predictive capability than observation and also consider the accuracy and utility of computer vision analysis against conventional job analysis for selected industrial jobs. This proposal addresses the NIOSH cross-sector programs in Musculoskeletal Disorders as well as in Exposure Assessment. This translational research is in concurrence with the Research to Practice (r2p) initiative by developing technology to disseminate knowledge from recent NIOSH-sponsored prospective studies on MSDs.
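As an illustration of the dose-response modeling the proposal describes (with simulated data and an assumed logistic form, not the project's validated risk model), the sketch below fits a logistic regression relating a video-derived exposure measure to incident MSD outcomes and reports the odds ratio per unit of exposure.

```python
# Illustrative only: simple logistic dose-response model relating a
# video-derived exposure measure (e.g., HAL) to incident MSD outcomes.
# The data are simulated; variable names and coefficients are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
hal = rng.uniform(1, 8, size=500)                   # simulated HAL scores
p = 1 / (1 + np.exp(-(-4.0 + 0.6 * hal)))           # assumed true dose-response
incident_msd = rng.binomial(1, p)                   # simulated outcomes

model = LogisticRegression().fit(hal.reshape(-1, 1), incident_msd)
odds_ratio = np.exp(model.coef_[0][0])              # OR per one-unit HAL increase
print(f"Estimated odds ratio per HAL unit: {odds_ratio:.2f}")
```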
|
1 |
2019 — 2020 |
Radwin, Robert; Li, Jingshan (co-PI); Mutlu, Bilge (co-PI) |
N/A Activity Code Description: No activity code was retrieved. |
FW-HTF-P: Human-Robot Collaboration For Enhancing Work Capabilities @ University of Wisconsin-Madison
The research objective of this Future of Work at the Human-Technology Frontier (FW-HTF) project is to create a matchmaking tool to identify opportunities in manufacturing workplaces where robot capabilities can augment or enhance human workers to improve productivity and safety. This planning project will investigate the possibility of integrating human workers into work processes by modeling tasks from an existing database of the critical tasks required to perform each of more than 1,000 occupations within the US economy. The project team will also engage with industry to understand differences between the results of task modeling and real work. Successful project outcomes will include the advancement of knowledge around future collaborative technologies, as well as the advancement of fundamental knowledge around the relationship and co-existence between American workers and intelligent robotic workplace assistants. A series of intensive workshops will foster new research collaborations between experts in process optimization, labor economics, and public health to build a multidisciplinary research team for a long-term FW-HTF research initiative.
This planning project will apply state-of-the-art work-design and task-allocation methods and tools to jobs and occupations that show high potential for the integration of collaborative robots, in order to gain a better understanding of the technical challenges in human-robot teaming. The research team will model a subset of candidate tasks from the US Department of Labor Occupational Information Network (O*NET) database and apply optimization-based tools to explore possibilities for human-robot teaming and to investigate the impact of human-robot teaming on both station- and system-level performance. The project team will analyze the effects of modeled work configurations on key factors such as productivity, ergonomics, and compatibility. Upon project completion, the team will better understand the discrepancy between work models and real-world work, as well as the measurement challenges and constraints in integrating collaborative robots into existing manufacturing environments. Workshop activities will foster new research collaborations with experts in process optimization, labor economics, and public health to build a multidisciplinary research team for a long-term initiative. The ultimate goal of this project is to develop the necessary research personnel, research infrastructure, and foundational work to expand the opportunities for studying future technology, future workers, and future work at the level of a FW-HTF full research project.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
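For a concrete, if simplified, picture of the optimization-based task-allocation tools mentioned in the abstract, the sketch below assigns each of a few hypothetical tasks to either the human worker or a collaborative robot using per-task suitability costs; the tasks, costs, and single-criterion objective are assumptions, and the project's models would add station- and system-level constraints such as cycle time and ergonomics.

```python
# Minimal sketch (hypothetical tasks and costs, not the project's tool):
# allocate each task to the human or the robot by a simple suitability score.
import numpy as np

tasks = ["fetch parts", "align fixture", "drive fasteners", "inspect surface"]
# Rows: tasks; columns: [human cost, robot cost] on an arbitrary 0-10 scale
cost = np.array([[4.0, 2.0],
                 [3.0, 6.0],
                 [7.0, 3.0],
                 [2.0, 8.0]])

assignment = cost.argmin(axis=1)          # 0 = human, 1 = robot
for task, who in zip(tasks, assignment):
    print(f"{task:>16}: {'robot' if who else 'human'}")
print(f"total modeled cost: {cost[np.arange(len(tasks)), assignment].sum():.1f}")
```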
|
0.915 |
2021 — 2023 |
Smeeding, Timothy; Radwin, Robert; Li, Jingshan (co-PI); Mutlu, Bilge (co-PI); Jacobs, Lindsay |
N/A Activity Code Description: No activity code was retrieved. |
FW-HTF-RM: Human-Robot Collaboration For Manual Work @ University of Wisconsin-Madison
This Future of Work at the Human-Technology Frontier (FW-HTF) project advances the vision of robots that work alongside workers to augment human activities, create better working conditions, provide more job satisfaction, open new employment prospects, and result in greater productivity than manual labor or automation alone. Existing principles and methods for work design and automation often use robots to displace human workers and are inherently limited in their economic and social benefits. This research advances a model of robots as physical assistants, working in partnership with human workers to redefine human work. The objective is to create a "matchmaking" process to identify human-robot pairings that can significantly improve productivity, health, and safety, while maximizing human labor resources. A multidisciplinary approach is applied to existing manual tasks in the manufacturing sector that show high potential to benefit from the integration of collaborative robots, in order to gain a better understanding of the impact of teaming humans and intelligent robotic assistants at the individual, operations, and socioeconomic levels. Finally, the insight gained from investigating current job characteristics, worker skills, and robot capabilities is applied to enhance the benefits of the next generation of manufacturing robots for the strength of the economy and the quality of life of the workforce. This research aims to provide an increased understanding of how human capabilities can be augmented and enhanced using collaborative robots for physical or cognitive work; a new computational framework to optimally match work characteristics and skills, human capabilities, and robot augmentations; innovative economic models of, and policy recommendations for, the transformation of human employment across industries through human-robot collaboration; and insight into how the next generation of robot systems should be designed. Existing manufacturing sector jobs and tasks that show high potential to benefit from the integration of collaborative robots will be studied to understand how human-robot pairings can significantly improve economic productivity and worker health and safety. The technical challenges in human-robot teaming are considered in the larger context of worker well-being and production, as well as the substitutability between labor and robotics, changes in demand for labor skills, and the response in labor demand and supply when technology changes for a specific process. The project includes laboratory simulations and expert review to validate the optimized human-robot work plans, and recommendations on how the next generation of robot systems should be designed to further augment human capabilities and productivity in work.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
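As a loose illustration of the "matchmaking" idea described above (made-up requirement and capability numbers, not the project's computational framework), the sketch below scores how well a human alone versus a human-robot pairing covers a task's physical and cognitive requirements.

```python
# Hedged illustration: coverage of hypothetical task requirements by a human
# alone and by a human augmented with a collaborative robot.
import numpy as np

requirements = {"lifting": 0.9, "precision": 0.6, "judgment": 0.8}   # hypothetical
human        = {"lifting": 0.4, "precision": 0.7, "judgment": 0.9}
robot_assist = {"lifting": 0.9, "precision": 0.8, "judgment": 0.0}

def coverage(capabilities):
    """Average fraction of each requirement met, capped at 1.0 per dimension."""
    met = [min(capabilities.get(k, 0.0) / v, 1.0) for k, v in requirements.items()]
    return float(np.mean(met))

combined = {k: max(human[k], robot_assist[k]) for k in requirements}
print(f"human alone coverage:   {coverage(human):.2f}")
print(f"human + robot coverage: {coverage(combined):.2f}")
```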
|
0.915 |