Robots impact humans by displacing workers. Some expect this to occur at an increasing rate, leading to proposed solutions such as basic income. Robotics is itself a lucrative business that creates careers, especially for postgraduates. Roboticists often aim to create machines that seem to interface naturally with humans. The field is under active research and development, with areas of interest including robot kinematics and quantum robotics.
Robotics usually combines four aspects of design work to create a robot:
Power source: Potential energy sources include wired electricity, a battery, and/or petrol.
Mechanical construction: A physical form or combination of forms is designed to functionally achieve tasks within a given range of environments. This can include locomotive elements such as wheels and caterpillar tracks, as well as hydraulic limbs and manipulators (e.g. hands).
Many different types of batteries can be used as a power source. Most are lead–acid batteries, which are safe and have relatively long shelf lives but are rather heavy compared to silver–cadmium batteries, which are much smaller in volume and much more expensive. Designing a battery-powered robot needs to take into account factors such as safety, cycle lifetime, and weight.
Generators, often some type of internal combustion engine, can also be used, but are often mechanically complex and inefficient. Additionally, a tether could connect the robot to a power supply, saving weight and space, but requiring a cumbersome cable.[3] Potential power sources include:
Actuators are the "muscles" of a robot, the parts which convert stored energy into movement.[4] The most popular actuators are electric motors that rotate a wheel or gear and linear actuators that control factory robots. Most robots use electric motors—often brushed and brushless DC motors in portable robots or AC motors in industrial robots and computer numerical control machines—especially in systems with lighter loads and where the predominant form of motion is rotational. Meanwhile, linear actuators move in and out and often have quicker direction changes, particularly when large forces are needed, such as with industrial robotics. They are typically powered by oil or compressed air, but can also be powered by electricity, usually via a motor and a leadscrew. The mechanical rack and pinion is common.
Recent alternatives to DC motors are piezoelectric motors, including ultrasonic motors, in which tiny piezoceramic elements vibrate many thousands of times per second, causing linear or rotary motion. One type uses the vibration of the piezo elements to step the motor in a circle or a straight line;[5] another type uses the piezo elements to vibrate a nut or drive a screw. The advantages of these motors are nanometer resolution, speed, and force for their size.[6][7][8]
Series elastic actuation (SEA) relies on introducing intentional elasticity between the motor actuator and the load for robust force control. Due to the resultant lower reflected inertia, series elastic actuation improves safety during robot interactions or collisions.[9] Further, it provides energy efficiency and shock absorption (mechanical filtering) while reducing excessive wear on the transmission and other components. This approach has successfully been employed in various robots, particularly advanced manufacturing robots[10] and walking humanoid robots.[11][12] The controller design of a series elastic actuator is most often performed within the passivity framework as it ensures the safety of interaction with unstructured environments.[13] However, this framework suffers from stringent limitations imposed on the controller, which may impact performance.[14][verification needed]
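The force-control idea behind series elastic actuation can be sketched in a few lines: because a known spring sits between motor and load, measuring its deflection gives a direct force reading that a controller can servo. The following is a minimal illustration, assuming a simple linear spring and a proportional gain; the constants and the position-correction scheme are illustrative, not taken from any real actuator.

```python
# Sketch of series elastic actuation: the spring deflection between the
# motor and the load yields a direct force measurement (Hooke's law),
# which a simple proportional controller can drive toward a desired
# force. All values (spring constant, gain) are illustrative.

def sea_force(k_spring: float, x_motor: float, x_load: float) -> float:
    """Force transmitted through the series spring."""
    return k_spring * (x_motor - x_load)

def sea_force_control(k_spring, x_motor, x_load, f_desired, kp=0.01):
    """One step of proportional force control: command a motor-position
    correction that moves the measured spring force toward f_desired."""
    f_measured = sea_force(k_spring, x_motor, x_load)
    return x_motor + kp * (f_desired - f_measured)

# Example: a 1000 N/m spring, motor at 0.02 m, load held at 0.0 m.
f = sea_force(1000.0, 0.02, 0.0)  # 20 N transmitted through the spring
```

Because the measured quantity is the spring deflection rather than motor current, force errors from friction and transmission dynamics are largely bypassed, which is one source of the robustness noted above.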
Pneumatic artificial muscles, also known as air muscles, are special tubes that expand (typically up to 42%) when air is forced inside them; they are used in some robot applications.[15][16][17] Muscle wire, also known as shape memory alloy, is a material that contracts (under 5%) when electricity is applied; it has been used in some small robots.[18][19] Electroactive polymers are plastic materials that can contract substantially (up to 380% activation strain) when electricity is applied and have been used in the facial muscles and arms of humanoid robots,[20] as well as to enable new robots to float,[21] fly, swim or walk.[22] Additionally, elastic carbon nanotubes are a promising experimental artificial muscle technology. The absence of defects in carbon nanotubes enables these filaments to deform elastically by several percent, with energy storage levels of perhaps 10 J/cm3 for metal nanotubes. Human biceps could be replaced with wire of this material measuring 8 millimetres (3⁄8 in) in diameter, feasibly allowing future robots to outperform humans.[23]
Robots with only one or two wheels can have advantages such as greater efficiency, reduced parts, and the ability to navigate confined areas. A one-wheeled robot balances on a round ball; Carnegie Mellon University's Ballbot is the approximate height and width of a person.[24][25] Several attempts have also been made to build spherical robots (also known as orb bots[26] or ball bots),[27] which move by spinning a weight inside the ball[28][29] or rotating outer shells.[30][31] Two-wheeled balancing robots generally use a gyroscope to detect how much the robot is falling and drive the wheels proportionally, up to hundreds of times per second, to counterbalance the fall, based on inverted pendulum dynamics.[32][33] NASA's Robonaut has been mounted to a Segway for a similar effect.[34] Most mobile robots have four wheels or continuous tracks. Six wheels can give better traction in outdoor terrain, while tracks provide even more grip. Tracked wheels are common for outdoor off-road robots, but are difficult to use indoors.[35] A small number of skating robots have been developed, one of which is a multimodal walking and skating device with four legs and unpowered wheels.[36][37]
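The balancing scheme described above can be sketched as a toy inverted-pendulum simulation: a gyroscope-style tilt estimate feeds a PD controller that corrects the fall many hundreds of times per second. The constants below (body length, gains, loop rate) are illustrative, not taken from any real robot.

```python
import math

# Toy two-wheeled balancing loop: the tilt angle plays the role of the
# gyroscope reading, and a PD controller applies a corrective "wheel"
# acceleration at a 1 kHz rate to counter the fall.

G, L, DT = 9.81, 0.5, 0.001  # gravity (m/s^2), body length (m), 1 ms step

def simulate(theta0: float, kp: float = 60.0, kd: float = 8.0,
             steps: int = 3000) -> float:
    """Integrate an inverted pendulum with PD wheel action; return the
    final tilt angle (radians) after `steps` control cycles."""
    theta, omega = theta0, 0.0            # tilt and tilt rate
    for _ in range(steps):
        wheel_action = kp * theta + kd * omega            # PD correction
        alpha = (G / L) * math.sin(theta) - wheel_action  # net tilt accel
        omega += alpha * DT
        theta += omega * DT
    return theta

final = simulate(0.1)  # start 0.1 rad off vertical; settles near upright
```

The proportional term resists the tilt while the derivative term damps the response; with the gains above the linearized system is stable, so the robot returns to upright within a few seconds of simulated time.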
Several robots have been made that can walk on two legs, but not yet as reliably as a human.[38] Many other robots have been built that walk on more than two legs, which is significantly easier.[39][40] Walking robots could be used on uneven terrain, providing a high degree of mobility and efficiency, but two-legged robots can currently handle only flat floors or, at best, stairs. Some approaches have included:
The zero moment point (ZMP) is the algorithm used by robots such as Honda's ASIMO. The robot's onboard computer tries to keep the total inertial forces (the combination of Earth's gravity and the acceleration and deceleration of walking) exactly opposed by the floor reaction force (the force of the floor pushing back on the robot's foot). In this way, the two forces cancel out, leaving no moment (force causing the robot to rotate and fall over).[41] Human observers note that this is not exactly how a human walks, with some describing ASIMO's walk as looking like it needs to use the bathroom.[42][43][44] ASIMO's walking algorithm utilizes some dynamic balancing, but requires a flat surface.
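The moment-balance condition above can be written down concretely. For a robot approximated as point masses, the ZMP is the floor point where the combined moment of gravity and inertial forces vanishes; keeping that point under the support foot is the balance criterion. The sketch below uses hypothetical masses and accelerations purely for illustration.

```python
# Illustrative zero-moment-point (ZMP) calculation for a robot modeled
# as point masses, from the condition that the net moment of gravity
# and inertial forces about the ZMP is zero.

G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(masses, xs, zs, ax, az):
    """x-coordinate of the ZMP for point masses at positions (x, z)
    with accelerations (ax, az)."""
    num = sum(m * ((a_z + G) * x - a_x * z)
              for m, x, z, a_x, a_z in zip(masses, xs, zs, ax, az))
    den = sum(m * (a_z + G) for m, a_z in zip(masses, az))
    return num / den

# A single static mass directly above x = 0.1 m: the ZMP sits below it.
p_static = zmp_x([50.0], [0.1], [0.9], [0.0], [0.0])   # 0.1
# Accelerating that mass forward shifts the ZMP backward.
p_moving = zmp_x([50.0], [0.1], [0.9], [1.0], [0.0])
```

The second example shows why a walking controller must coordinate body acceleration with foot placement: forward acceleration moves the ZMP toward the rear edge of the foot, and if it leaves the support area the robot begins to tip.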
Several robots, built in the 1980s by Marc Raibert at the MIT Leg Laboratory, successfully demonstrated very dynamic walking. Initially, a robot with only one leg and a very small foot could stay upright simply by hopping. The movement was the same as that of a person on a pogo stick: as the robot fell to one side, it would jump slightly in that direction to catch itself.[45] Soon, the algorithm was generalized to two and four legs. A bipedal robot was demonstrated running and even performing somersaults.[46] A quadruped was also demonstrated which could trot, run, pace, and bound.[47][48]
A more advanced approach is a dynamic balancing algorithm, which constantly monitors the robot's motion and places the feet to maintain stability.[49] This technique has been demonstrated by Anybots' Dexter robot[50] (which is so stable it can perform jumps)[51] and Delft University's Flame.
Perhaps the most promising approach uses passive dynamics, in which the momentum of swinging limbs is used to power walking, perhaps increasing efficiency to ten times that of ZMP.[52][53]
A modern passenger airliner is essentially a flying robot, with two humans to manage it. The autopilot can control the plane through takeoff, normal flight, and landing.[54] Unmanned aerial vehicles (UAVs) can be smaller and lighter and fly into dangerous territory for military use, perhaps even being triggered to fire automatically. Other flying robots include cruise missiles, the entomopter, and the Epson micro helicopter robot. Additionally, some lighter-than-air robots are propelled by paddles and guided by sonar.
Biomimetic flying robots (BFRs) take inspiration from flying mammals, birds, or insects. They can have flapping wings, which generate the lift and thrust, or they can be propeller-actuated. Flapping-wing designs have increased maneuverability and reduced energy consumption compared to propeller actuation.[55] BFRs inspired by mammals and birds share similar flight characteristics and design considerations. For instance, they minimize edge fluttering and pressure-induced wingtip curl by increasing the rigidity of the wing edge.
Mammal-inspired BFRs typically take inspiration from bats, with the flying squirrel also inspiring a prototype.[56][57][58] Mammal-inspired BFRs can be designed to be multimodal; being capable of both flight and terrestrial movement. Shock absorbers can be implemented to reduce the impact of landing.[58] Alternatively, the BFR can pitch up and increase the amount of drag.[56] Different land gait patterns can also be implemented.[56]
Bird-inspired BFRs can take inspiration from raptors, gulls, and others.[59][60] They can be feathered to increase the range of angles over which the robot can operate before stalling.[61] The wings of bird-inspired BFRs allow for in-plane deformation, which can be adjusted to maximize flight efficiency depending on the flight gait.[61]
Insect-inspired BFRs typically take inspiration from beetles or dragonflies.[62][63][64]
Capuchin, a climbing robot
Several different approaches have been used to develop robots that have the ability to climb vertical surfaces. One approach mimics the movements of a human climber on a wall with protrusions; adjusting the center of mass and moving each limb in turn to gain leverage.[65] Another approach uses the specialized toe-pad method of wall-climbing geckoes, which can run on smooth surfaces such as vertical glass,[66][67] one example being named Speedy Freelander. A third approach is to mimic the motion of a snake climbing a pole.[68] Separately, snake robots can be used for horizontal navigation, possibly being able to search through confined spaces[69] and navigate amphibiously.[70][71]
It is calculated that when swimming, some fish can achieve a propulsive efficiency greater than 90%.[72] Furthermore, they can accelerate and maneuver far better than any man-made boat or submarine, and cause less disturbance, all abilities desirable for aquatic robots,[73][74] one of which models fish locomotion.[75] One example copies the streamlined shape and propulsion of the front 'flippers' of penguins.[76] Others emulate the locomotion of the manta ray and jellyfish. In 2014, a robotic fish outperformed some real fish in average maximum velocity and endurance.[77][78][79]
Sailboat robots, such as Vaimos, have been developed to make measurements at the surface of the ocean.[80] Since sailboat robots are wind-propelled, the batteries only power the computer, communication, and actuators (to tune the rudder and sail). Two major sailboat robot competitions are the Microtransat Challenge and the World Robotic Sailing Championship.
A definition of robotic manipulation has been described by Matthew T. Mason as the robot's "control of its environment through selective contact".[81] Robots need to manipulate objects: to pick up, modify, destroy, move, or otherwise have an effect on them. Thus a robotic arm is referred to as a manipulator[82] and its functional end (e.g. a tool or hand) is known as an end effector.[83] Most robot arms have replaceable end effectors, each allowing them to perform some small range of tasks. Some have a fixed manipulator that cannot be replaced, including highly versatile manipulators like a humanoid hand.[84][85][86] Some of these have powerful dexterity intelligence, up to 20 degrees of freedom, and hundreds of tactile sensors.[87]
One of the most common types of end effector is the 'gripper'. In its simplest manifestation, a gripper consists of just two fingers that can open and close to pick up and let go of small objects. Fingers can be made of a chain with a metal wire running through it.[88] Hands that resemble and work more like a human hand include the Shadow Hand and the Robonaut hand.[89][90][91] Mechanical grippers can come in various types, including friction and encompassing jaws. Friction jaws use all the force of the gripper to hold the object in place using friction. Encompassing jaws cradle the object in place, using less friction.
Suction end effectors, powered by vacuum generators, are very simple astrictive[92] devices that can hold very large loads provided the prehension surface is smooth enough to ensure suction. Pick-and-place robots for electronic components, and for large objects like car windscreens, often use very simple vacuum end effectors. Suction is a widely used type of end effector in industry, in part because the natural compliance of soft suction end effectors can be less likely to damage objects.
The mechanical structure of a robot must be controlled to perform tasks.[93] The control of a robot involves three distinct phases – perception, processing, and action (robotic paradigms).[94] Sensors give information about the environment or the robot itself (e.g. the position of its joints or its end effector).[95] This information is then processed to be stored or transmitted and to calculate the appropriate signals to the actuators (motors), which move the mechanical structure to achieve the required coordinated motion or force actions.
The processing phase can range in complexity. At a reactive level, it may translate raw sensor information directly into actuator commands (e.g. firing motor power electronic gates based directly upon encoder feedback signals to achieve the required torque/velocity of the shaft). Sensor fusion and internal models may first be used to estimate parameters of interest (e.g. the position of the robot's gripper) from noisy sensor data. An immediate task (such as moving the gripper in a certain direction until an object is detected with a proximity sensor) is sometimes inferred from these estimates. Techniques from control theory are generally used to convert the higher-level tasks into individual commands that drive the actuators, most often using kinematic and dynamic models of the mechanical structure.[93][94][96]
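The perception-processing-action cycle described above can be sketched for the example task mentioned in the text: moving a gripper in one direction until a proximity sensor detects an object. The sensor and actuator below are stubbed out as plain functions; the step size and threshold are illustrative.

```python
# Minimal sketch of one reactive control cycle (sense, decide, act) for
# a hypothetical gripper that advances until a proximity sensor reports
# an object within a threshold distance.

def control_step(read_proximity, position, step=0.01, threshold=0.05):
    """One cycle: read the sensor, decide, and either advance or stop.
    Returns (new_position, done)."""
    distance = read_proximity(position)   # perception
    if distance > threshold:              # processing
        return position + step, False     # action: keep moving
    return position, True                 # object reached: stop

def run(read_proximity, position=0.0, max_cycles=1000):
    """Repeat control cycles until the object is detected."""
    for _ in range(max_cycles):
        position, done = control_step(read_proximity, position)
        if done:
            break
    return position

# Simulated object at 0.5 m: the gripper stops 0.05 m short of it.
final = run(lambda p: 0.5 - p)
```

Real controllers replace the stubbed sensor with filtered, fused estimates and the position increment with torque or velocity commands derived from kinematic and dynamic models, but the sense-decide-act structure is the same.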
At longer time scales or with more sophisticated tasks, the robot may need to build and reason with a cognitive model, which tries to represent the robot, the world, and how the two interact. Pattern recognition and computer vision can be used to track objects.[93] Mapping, motion planning and other AI techniques may be used to figure out how to act and avoid obstacles.
Modern commercial robotic control systems are highly complex, integrate multiple sensors and effectors, have many interacting degrees of freedom, and require operator interfaces, programming tools and real-time capabilities.[94] They are often connected to wider communication networks, including the Internet of things, a network correlating physical objects.[97] Progress towards open-architecture, layered, user-friendly and 'intelligent' sensor-based interconnected robots has emerged from earlier concepts related to flexible manufacturing systems. Further, several 'open' or 'hybrid' reference architectures provide advantages over prior 'closed' robot control systems.[96] Open-architecture controllers are said to be better able to meet the growing requirements of a wide range of robot users, including system developers, end users and research scientists, and are better positioned to contribute advanced industrial concepts.[96] In addition to utilizing many established features of robot controllers, such as position, velocity and force control of end effectors, they also enable interconnection and the implementation of more advanced sensor fusion and control techniques, including adaptive control, fuzzy control and artificial neural network–based control.[96] When implemented in real time, such techniques can potentially enable more adaptive control systems to work in unfamiliar environments.[98] Generic reference architectures and associated interconnected, open-architecture robot and controller implementations have been used in a number of studies.[98][99]
A color sensor on a robot
Sensors allow robots to receive information about the environment or internal components. This is essential for robots to perform their tasks and respond appropriately to changes. Sensors are used for various forms of measurement, to provide real-time information, and to give the robots warnings; they can include cameras and microphones, as well as those that monitor network signals, power level, pressure, and temperature.
Current robotic and prosthetic hands receive far less tactile information than the human hand. Recent research has developed a tactile sensor array that mimics the mechanical properties and touch receptors of human fingertips.[100][101] The sensor array is constructed as a rigid core surrounded by conductive fluid contained by an elastomeric skin. Electrodes are mounted on the surface of the rigid core and connected to an impedance-measuring device within the core. When the artificial skin touches an object, the fluid path around the electrodes is deformed, producing impedance changes that map the forces received from the object. An important function of artificial fingertips will likely be adjusting the grip on held objects. Scientists from several European countries and Israel developed a prosthetic hand in 2009 which functions like a real one—allowing patients to write, type on a keyboard, and perform other fine movements. The prosthesis has sensors which enable the patient to sense through its fingertips.[102]
Other common forms of sensing in robotics use lidar, radar, and sonar.[68] Lidar measures the distance to a target by illuminating the target with laser light and measuring the reflected light with a sensor. Radar uses radio waves to determine the range, angle, or velocity of objects. Sonar uses sound propagation to navigate, communicate with or detect objects on or under the surface of the water.
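All three ranging methods above share the same time-of-flight arithmetic: the sensor emits a pulse, times the echo, and divides the round trip by two at the appropriate propagation speed. A minimal sketch, with illustrative echo times:

```python
# Time-of-flight ranging as used by lidar (speed of light) and sonar
# (speed of sound): the round-trip echo time covers twice the target
# distance, so distance = speed * time / 2.

C_LIGHT = 299_792_458.0   # speed of light in vacuum, m/s
C_SOUND_WATER = 1_480.0   # approximate speed of sound in water, m/s

def range_from_echo(round_trip_s: float, speed: float) -> float:
    """Target distance (m) from a measured round-trip echo time (s)."""
    return speed * round_trip_s / 2.0

# A lidar return after 667 ns corresponds to a target about 100 m away;
# a sonar echo after the same interval would indicate a target less
# than a millimetre away, which is why sonar uses far longer timescales.
d_lidar = range_from_echo(667e-9, C_LIGHT)   # ~99.98 m
```

The contrast in the comment is the practical difference between the modalities: light's speed demands sub-nanosecond timing for centimetre resolution, while sound allows much coarser clocks at the cost of slower updates.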
Computer vision is the AI interpretation of image data. This can take many forms, such as video. In most practical computer-vision applications, the computers are programmed to solve a particular task, but methods based on machine learning are becoming increasingly common. Computer-vision systems rely on image sensors that detect electromagnetic radiation, typically in the form of visible or infrared light. The sensors are designed using solid-state physics. The process by which light propagates and reflects off surfaces is explained using optics. Sophisticated image sensors require quantum mechanics to provide a complete understanding of the image-formation process. Robots can also be equipped with multiple vision sensors to be better able to compute depth. Like human eyes, robotic eyes must also be able to focus on particular areas of interest and adjust to variations in light intensity.
There is a subfield within computer vision wherein artificial systems are designed to mimic the processing and behavior of a biological system at different levels of complexity. More abstractly, robot forms inspired by origami are designed to sense and analyze in extreme environments.[103]
A robot with remote-control programming, possibly operated by haptic or teleoperated devices, has a preexisting set of commands that it will only perform when it receives a signal from a control source—essentially a form of automation, with humans having nearly complete control over the robot.
Meanwhile, AI-supported autonomous robots operate without a control source and can use their programming to determine responses to various stimuli.[104] They do not necessarily require complex cognition; industrial robots, for example, carry out repetitive tasks in assembly plants. The operator may simply select tasks or certain modes of operation, which the robot then performs automatically.
Hybrid robots may be assisted by an operator who commands certain moves or actions, which the robot uses its programming to perform.[105]
For effective use in domestic environments, the way robots receive commands should be intuitive even for people with no technological skillset. Science-fiction authors and futurists often envision humans communicating with robots via speech, gestures, and facial expressions rather than a command-line interface.[107][additional citation(s) needed] Studies have shown that, for some people, interacting with a robot or imagining doing so can reduce negative feelings they may have about robots,[108] but this can also bolster strong negative prejudices.[108] Researchers are trying to create robots that demonstrate personality,[109][110] regardless of whether this is desirable in commercial machines.[111] Sounds, facial expressions, and body language can be used to convey emotions, e.g. in the toy robot dinosaur Pleo (c. 2006).[112] Further, robots may incorporate awareness of personal space into their interactions.
Other hurdles exist when a voice is used to interact with humans. For social reasons, synthetic voice proves suboptimal as a communication medium,[113] making it necessary to develop the emotional component of robotic voice through various techniques.[114][115] One of the earliest examples is a teaching robot developed in 1974 by Michael J. Freeman,[116][117] who converted digital memory to rudimentary verbal speech via pre-recorded computer discs.[118] Freeman's robot was programmed to teach students in The Bronx, New York.[118]
Meanwhile, recognizing human speech in real time is a difficult task for a computer, mostly because of speech's great variability.[119] The sound of a word can vary greatly depending on accent, acoustics, volume, the previous word spoken, and the speaker's health.[120] Strides have been made in the field since the first "voice input system" was designed in 1952.[121] By the end of the 20th century, the best systems could recognize continuous, natural speech up to 160 words per minute with 95% accuracy.[122] AI-assisted machines can use voice to identify emotions.[123] Social robots will likely need to be able to recognize gestures (and perhaps perform them) to assist verbal communication.[124][125] The processing and simulation of emotions by AI is known as affective computing.
A robot should be able to interact with a human appropriately based upon their facial expressions and body language. Expressive synthetic faces have been constructed by Hanson Robotics using an elastic polymer (rubber) skin mesh animated by subsurface motors (servos), which are in turn embedded on a metal skull.[126] Robots like Kismet can produce a range of facial expressions, enabling engagement in meaningful social exchanges.[127][128] The interactive Robin the Robot similarly uses AI-based analysis and displays emotions to try to ease signs of stress and anxiety.[129]
Current and potential applications of robots include:
Energy applications, including cleanup of nuclear-contaminated areas[a] and cleaning of solar panel arrays
Food processing, including commercial production of burgers, pizza, salads, frozen yogurts, coffee, and cocktails.[137] Spyce Kitchen ran two robotic food-bowl restaurants in Massachusetts (2018–2022).[138]
Industrial robots for manufacturing and assembly: Robots have been increasingly used in manufacturing since the 1960s. According to Robotic Industries Association US data, in 2016 the automotive industry was the main customer of industrial robots, accounting for 52% of total sales.[139] They can perform over half of the labor in the auto industry, including heavy-duty work such as car assembly.[140] By 2003, an IBM keyboard manufacturing factory in Texas was fully automated as a "lights out" factory.[141]
A robot technician builds small all-terrain robots (courtesy: MobileRobots, Inc.).
The incorporation of robots into industry has increased efficiency and productivity. It is typically seen as a long-term investment, and perhaps even an essential component of manufacturing. However, robots have the potential to replace much of the work performed by humans; a 2017 study found that automation alone puts 47% of US jobs at eventual risk.[146] Robotics is thus often used as an argument for basic income to replace lost wages. Theoretical physicist Stephen Hawking observed in 2016:[147]
The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.
As of 2022, China had the greatest number of industrial robots in operation with 1.5 million units and was increasing that figure by more than 20% annually.[148]
The spread of robotics presents both opportunities and challenges for occupational safety and health (OSH).[149] Despite lost wages, the substitution of people working in unhealthy or dangerous environments is an OSH benefit. This not only includes high-risk jobs in space, security, and energy, but also dirty or unsafe work in logistics, maintenance, and inspection that requires exposure to physical and/or psychosocial risks, including those stemming from repetitive or monotonous tasks better suited to machines. Robots are likely to gradually replace such jobs in other sectors like agriculture, cleaning, construction, firefighting, healthcare, and transportation.[150]
On the other hand, humans are better suited than machines for light-duty jobs involving various levels of creativity, decision-making, and flexibility. Humans and robots increasingly work in parallel within their areas of expertise. The need to work safely in close proximity has resulted in cobots (collaborative robots).[151][152] Some European countries are including robotics in their national programs, promoting healthy cooperation between robots and operators to increase productivity.[153]
Robotics is an interdisciplinary field, primarily combining mechanical engineering and computer science but also drawing on electronic engineering and other subjects. Undergraduate degrees are usually obtained in one of these subjects before pursuing a graduate degree in robotics. Robotics engineers design and maintain robots, develop new applications, and conduct research.[154] As of 2011, the number of robotics-related jobs was steadily rising as factories increasingly utilized robots.[155] According to a September 2021 GlobalData report, the robotics industry was worth $45 billion in 2020 and is projected to grow at a compound annual growth rate of 29% to $568 billion by 2030, driving jobs in robotics and related industries.[156]
Much of the research in robotics focuses not on specific industrial tasks, but on investigations into new types of robots, alternative ways to think about or design robots, and new ways to manufacture them. In 1997, Professor Hans Moravec, principal research scientist at Carnegie Mellon University's Robotics Institute, predicted that robot intelligence would reach the capacity of a lizard by 2010, a mouse by 2020, then a monkey and finally a human by around 2045.[157]
The study of motion can be divided into kinematics and dynamics.[158] Direct (or forward) kinematics refers to calculating the position and orientation of the end effector from known joint values, while in inverse kinematics the end-effector state is specified and the joint values that achieve it are computed. Kinematics also encompasses calculation efficiency, collision avoidance, and stalling prevention. Meanwhile, dynamics is used to study the effect of forces upon given kinematic motions. Direct dynamics refers to the calculation of accelerations once the applied forces are known, and is used in computer simulations. Inverse dynamics refers to the calculation of the actuator forces that result in certain end-effector accelerations.
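The forward/inverse distinction is easiest to see on a planar two-link arm, a standard textbook example (the link lengths and angles below are hypothetical). Forward kinematics maps joint angles to the end-effector position by summing the link vectors; inverse kinematics recovers joint angles for a desired position via the law of cosines.

```python
import math

# Forward and inverse kinematics for an illustrative planar two-link
# arm with link lengths l1, l2 and joint angles t1, t2 (radians).

def forward(l1, l2, t1, t2):
    """End-effector (x, y) from known joint angles."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

def inverse(l1, l2, x, y):
    """One (elbow-up) joint solution reaching (x, y), via the law of
    cosines; raises ValueError if the point is out of reach."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    t2 = math.acos(c2)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

# Round trip: the inverse solution reproduces the forward position.
x, y = forward(1.0, 0.8, 0.4, 0.9)
t1, t2 = inverse(1.0, 0.8, x, y)
```

The `ValueError` branch reflects a real property of inverse kinematics: unlike the forward problem, it may have no solution (target outside the workspace) or several (elbow-up versus elbow-down), which is why industrial controllers must select among solution branches.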
Open-source robotics research seeks standards for defining, and methods for designing and building, robots so that they can easily be reproduced by anyone. Research includes legal and technical definitions; seeking out alternative tools and materials to reduce costs and simplify builds; and creating interfaces and standards for designs to work together. Human usability research also investigates how to best document builds through visual, text or video instructions.
Evolutionary robotics is a methodology that uses evolutionary computation to help design robots, especially the body form, or motion and behavior controllers. In a similar way to natural evolution, a large population of robots is allowed to compete in some way, or their ability to perform a task is measured using a fitness function. Those that perform worst are removed from the population and replaced by a new set with behaviors based on those of the winners. Over time the population improves and eventually a satisfactory robot may appear without direct human intervention. Researchers use this method both to create better robots[159] and to explore the nature of evolution.[160] Because the process often requires many generations of robots to be simulated,[161] this technique may be run entirely or mostly in simulation before testing the evolved algorithms on real robots.[162]
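The cull-and-mutate loop described above can be shown in miniature. Here the "robots" are reduced to parameter vectors scored against a hypothetical target, standing in for a behavior measured by a fitness function in simulation; population size, mutation scale, and the target itself are illustrative.

```python
import random

# Toy evolutionary loop: score a population with a fitness function,
# keep the better half, and refill with gaussian-mutated copies of
# random survivors. The target vector is purely illustrative.

TARGET = [0.2, -0.5, 0.9]  # hypothetical ideal controller parameters

def fitness(params):
    """Higher is better: negative squared error to the target."""
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def evolve(pop_size=40, generations=200, sigma=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # best first
        survivors = pop[: pop_size // 2]      # cull the worst half
        children = [[p + rng.gauss(0, sigma)
                     for p in rng.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()  # parameters approach the target without human tuning
```

Because survivors are carried over unchanged (elitism), the best fitness never regresses, and over generations the population drifts toward the optimum with no human intervention, mirroring the process sketched in the text.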
Bionics and biomimetics apply the physiology and methods of locomotion of animals to the design of robots. For example, the design of BionicKangaroo was based on the way kangaroos jump.
Swarm robotics is an approach to the coordination of multiple robots as a system consisting of large numbers of mostly simple physical robots. According to one source, "In a robot swarm, the collective behavior of the robots results from local interactions between the robots and between the robots and the environment in which they act."[106][attribution needed]
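A single local interaction rule of the kind the quotation describes can be sketched directly: each robot senses only nearby neighbors and drifts toward their average position (a "cohesion" rule). This is purely illustrative; real swarms combine several such rules (cohesion, separation, alignment) and run them on each robot independently.

```python
# One local rule for a robot swarm: each robot moves a small step toward
# the centroid of the neighbors within its sensing range. No robot uses
# global information; aggregation emerges from the local rule.

def step(positions, sense_range=2.0, gain=0.1):
    """One synchronous update of all robots' (x, y) positions."""
    new = []
    for i, (x, y) in enumerate(positions):
        nbrs = [(px, py) for j, (px, py) in enumerate(positions)
                if j != i
                and (px - x) ** 2 + (py - y) ** 2 <= sense_range ** 2]
        if nbrs:
            cx = sum(p[0] for p in nbrs) / len(nbrs)
            cy = sum(p[1] for p in nbrs) / len(nbrs)
            x, y = x + gain * (cx - x), y + gain * (cy - y)
        new.append((x, y))
    return new

robots = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
for _ in range(50):
    robots = step(robots)  # the group contracts toward its centroid
```

Even though no robot knows the swarm's centroid, repeated application of the local rule draws the whole group together, a small instance of collective behavior emerging from local interactions.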
^ Roozing, Wesley; Li, Zhibin; Tsagarakis, Nikos; Caldwell, Darwin (2016). "Design Optimisation and Control of Compliant Actuation Arrangements in Articulated Robots for Improved Energy Efficiency". IEEE Robotics and Automation Letters. 1 (2): 1110–1117. Bibcode:2016IRAL....1.1110R. doi:10.1109/LRA.2016.2521926. S2CID 1940410.
^ Otake, Mihoko; Kagami, Yoshiharu; Ishikawa, Kohei; Inaba, Masayuki; Inoue, Hirochika (6 April 2001). Wilson, Alan R.; Asanuma, Hiroshi (eds.). "Shape design of gel robots made of electroactive polymer gel". Smart Materials. 4234: 194–202. Bibcode:2001SPIE.4234..194O. doi:10.1117/12.424407. S2CID 30357330.
^ Pratt, G. A.; Williamson, M. M. (1995). "Series elastic actuators". Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human-Robot Interaction and Cooperative Robots. Vol. 1. pp. 399–406. doi:10.1109/IROS.1995.525827. hdl:1721.1/36966. ISBN 0-8186-7108-4. S2CID 17120394.
^ Pratt, Jerry E.; Krupp, Benjamin T. (2004). "Series Elastic Actuators for legged robots". In Gerhart, Grant R.; Shoemaker, Chuck M.; Gage, Douglas W. (eds.). Unmanned Ground Vehicle Technology VI. Vol. 5422. pp. 135–144. doi:10.1117/12.548000. S2CID 16586246.
^ Li, Zhibin; Tsagarakis, Nikos; Caldwell, Darwin (2013). "Walking Pattern Generation for a Humanoid Robot with Compliant Joints". Autonomous Robots. 35 (1): 1–14. doi:10.1007/s10514-013-9330-7. S2CID 624563.
^ Colgate, J. Edward (1988). The control of dynamically interacting systems (Thesis). hdl:1721.1/14380.
^ Calanca, Andrea; Muradore, Riccardo; Fiorini, Paolo (November 2017). "Impedance control of series elastic actuators: Passivity and acceleration-based control". Mechatronics. 47: 37–48. doi:10.1016/j.mechatronics.2017.08.010.
^ Tondu, Bertrand (2012). "Modelling of the McKibben artificial muscle: A review". Journal of Intelligent Material Systems and Structures. 23 (3): 225–253. doi:10.1177/1045389X11435435. S2CID 136854390.
^ Collins, S. H.; Ruina, A. (2005). "A Bipedal Walking Robot with Efficient and Human-Like Gait". Proceedings of the 2005 IEEE International Conference on Robotics and Automation. pp. 1983–1988. doi:10.1109/ROBOT.2005.1570404. ISBN 0-7803-8914-X. S2CID 15145353.
^ Hu, Zheng; McCauley, Raymond; Schaeffer, Steve; Deng, Xinyan (May 2009). "Aerodynamics of dragonfly flight and robotic design". 2009 IEEE International Conference on Robotics and Automation. pp. 3061–3066. doi:10.1109/ROBOT.2009.5152760. ISBN 978-1-4244-2788-8. S2CID 12291429.
^ Monkman, G. J.; Hesse, S.; Steinmann, R.; Schunk, H. (2007). Robot Grippers. Berlin, Germany: Wiley.
^ Tijsma, H. A.; Liefhebber, F.; Herder, J. L. (2005). "Evaluation of New User Interface Features for the MANUS Robot Arm". 9th International Conference on Rehabilitation Robotics, 2005. ICORR 2005. pp. 258–263. doi:10.1109/ICORR.2005.1501097. ISBN 0-7803-9003-2. S2CID 36445389.
^ a b Wullenkord, Ricarda; Fraune, Marlena R.; Eyssel, Friederike; Sabanovic, Selma (2016). "Getting in Touch: How imagined, actual, and physical contact affect evaluations of robots". 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). pp. 980–985. doi:10.1109/ROMAN.2016.7745228. ISBN 978-1-5090-3929-6. S2CID 6305599.
^ Park, S.; Sharlin, Ehud; Kitamura, Y.; Lau, E. (29 April 2005). Synthetic Personality in Robots and its Effect on Human-Robot Relationship (Report). doi:10.11575/PRISM/31041. hdl:1880/45619.
^ Pauletto, Sandra; Bowles, Tristan (2010). "Designing the emotional content of a robotic speech signal". Proceedings of the 5th Audio Mostly Conference on a Conference on Interaction with Sound – AM '10. pp. 1–8. doi:10.1145/1859799.1859804. ISBN 978-1-4503-0046-9. S2CID 30423778.
^Bowles, Tristan; Pauletto, Sandra (2010). Emotions in the Voice: Humanising a Robotic Voice(PDF). Proceedings of the 7th Sound and Music Computing Conference. Barcelona. Archived(PDF) from the original on 2023-02-10. Retrieved 2023-03-15.
^Norberto Pires, J. (December 2005). "Robot-by-voice: experiments on commanding an industrial robot using the human voice". Industrial Robot. 32 (6): 505–511. doi:10.1108/01439910510629244.
^Fournier, Randolph Scott; Schmidt, B. June (1995). "Voice input technology: Learning style and attitude toward its use". Delta Pi Epsilon Journal. 37 (1): 1–12. ProQuest1297783046.
^Cheng Lin, Kuan; Huang, Tien-Chi; Hung, Jason C.; Yen, Neil Y.; Ju Chen, Szu (7 June 2013). "Facial emotion recognition towards affective computing-based learning". Library Hi Tech. 31 (2): 294–307. doi:10.1108/07378831311329068.
^Waldherr, Stefan; Romero, Roseli; Thrun, Sebastian (1 September 2000). "A Gesture Based Interface for Human-Robot Interaction". Autonomous Robots. 9 (2): 151–173. doi:10.1023/A:1008918401478. S2CID1980239.
^Saad, Ashraf; Kroutil, Ryan (2012). Hands-on Learning of Programming Concepts Using Robotics for Middle and High School Students. Proceedings of the 50th Annual Southeast Regional Conference of the Association for Computing Machinery. ACM. pp. 361–362. doi:10.1145/2184512.2184605.
^Hunt, V. Daniel (1985). "Smart Robots". Smart Robots: A Handbook of Intelligent Robotic Systems. Chapman and Hall. p. 141. ISBN978-1-4613-2533-8. Archived from the original on 2023-03-15. Retrieved 2018-12-04.
^Arámbula Cosío, F.; Hibberd, R. D.; Davies, B. L. (July 1997). "Electromagnetic compatibility aspects of active robotic systems for surgery: the robotic prostatectomy experience". Medical and Biological Engineering and Computing. 35 (4): 436–440. doi:10.1007/BF02534105. ISSN1741-0444. PMID9327627. S2CID21479700.
^Frey, Carl Benedikt; Osborne, Michael A. (January 2017). "The future of employment: How susceptible are jobs to computerisation?". Technological Forecasting and Social Change. 114: 254–280. CiteSeerX10.1.1.395.416. doi:10.1016/j.techfore.2016.08.019.
^Žlajpah, Leon (15 December 2008). "Simulation in robotics". Mathematics and Computers in Simulation. 79 (4): 879–897. doi:10.1016/j.matcom.2008.02.017.
Autor, David H. (1 August 2015). "Why Are There Still So Many Jobs? The History and Future of Workplace Automation". Journal of Economic Perspectives. 29 (3): 3–30. doi:10.1257/jep.29.3.3. hdl:1721.1/109476.