2016年12月13日星期二

Specialized Thermoplastics: HPTPs

High-performance thermoplastics (HPTPs) are specialized polymers used for demanding applications, largely due to their thermal resistance in comparison to engineering thermoplastics (ETPs) such as nylons and polycarbonates (PC). One tradeoff, however, is that their prices are often higher.
In general, economics play a major role in product design, says Anthony Vicari, Advanced Materials Lead Analyst at Lux Research.
“When designing a product, companies will choose the cheapest material that meets their needs; an expensive HPTP like Polyetherimide (PEI) or Polyetheretherketone (PEEK) is not freely chosen if a cheaper ETP like PC can suffice.”
A thermoplastic is a plastic made from polymer resins that becomes a homogenized liquid when heated and hardens when cooled. When frozen, however, it becomes glass-like and fractures easily. A variety of thermoplastics exist, with each type varying in crystalline structure and density.
HPTPs are also typically defined as high-temperature thermoplastics with a melting point above 150 °C.
“A high melting point for a thermoplastic correlates with other important characteristics such as mechanical strength and chemical resistance/inertness,” says Vicari.
Commodity thermoplastics, ETPs, and HPTPs fall into a structure often depicted as a “thermoplastic performance pyramid” with HPTPs at the top. Vicari says that, in general, the higher the performance of the polymer, the higher its price and location on the pyramid. The pyramid itself is divided into four stages with each stage split into two categories: amorphous and semi-crystalline. The four stages of plastics from bottom to top are: commodity, engineering, high temperature, and extreme temperature.
Over time, advances in polymer quality, polymer processing, and product design are improving the performance of ETPs. This may blur the distinction between the higher performing ETPs like PC and the lower performing HPTPs like Polysulfone (PSU), making substitutions possible in some cases, says Vicari. The average yearly global growth rates for the major ETPs are around 4%.
Thermoplastics and Market Demands
“Demand for polymers in general continues to rise, and with that the demand for HPTPs,” Vicari says. “Across a broad spectrum of applications, companies continue to demand higher performing products, requiring higher performing materials.”
For example, PEEK is a linear thermoplastic that can be processed in a temperature range of 350-420°C with excellent chemical and temperature resistance and mechanical strength. It has applications in environments such as nuclear and chemical plants, oil and geothermal wells, and high-pressure steam valves. Major manufacturers include Victrex, Solvay, Evonik, and Arkema.
Key HPTP markets include medical, aerospace, electronics, automotive, oil and gas (O&G), and, more recently, 3-D printing or additive manufacturing.
Over the past three decades, the use of plastics in automotive applications has grown due to their light weight and, therefore, superior fuel efficiency in comparison to metal. HPTPs and ETPs also offer more design flexibility, helping to reduce development time and cost. Current trends are mainly influenced by cost and weight reduction, environmental considerations, and recycling from end-of-life vehicles.
At present, only some ETPs like polycarbonate can be recycled. “In general, recycling results in some loss of material properties compared to virgin material, so recycled plastics are used in less demanding applications,” says Vicari.
The HPTP market faces several challenges. First, companies discover and bring to market new polymers only rarely. Most advances are developments of new grades or blends of existing polymers.
Second, only a few markets like aerospace and oil and gas can support the costs associated with the highest performing polymers. This limits growth for those materials at the high end, while improvements in ETP performance outlined earlier limit growth at the low end of the HPTP market.
Third, in addition to raw material cost, the high melting points that make HPTPs useful also make processing these materials more difficult and expensive, says Vicari.
On the other hand, opportunities include growth in the fiber reinforced composites market, efforts toward light weighting in aerospace and transportation, and 3-D printing. In the case of 3-D printing, intrinsically high HPTP performance can compensate for the lower quality of print versus molded plastics.
Fabrication Processes and Prospects
The thermoplastics fabrication industry has been evolving for decades, starting at least in 1926 with production of one of the first commercial injection molding machines by Ziegler and Eckert.
Typically, a fabrication process depends largely on the end-use application and required properties of the product. For example, applications in the automotive, manufacturing, and electronics sectors rely on injection molding and to a growing extent 3-D printing. Meanwhile, food and medical applications use thermoforming. Other technologies include blown film, compounding, and extrusion.
Potential entrants to the thermoplastic fabrication industry are likely to face various road-blocks along with a few opportunities depending on the type of technology they choose. In general, limiting factors according to the consultancy Nexant include: rising environmental concerns along with stringent government standards and regulations, higher power costs corresponding to increased throughput, and inter-polymer resin competition for injection molded applications.
As demand for plastics continues to rise faster than global GDP, interest exists in using various materials and resin blends and in developing machine processing versatility.
Over the last decade, interest in 3-D printing has increased, resulting in growing competition among developers, says Nexant. Despite the potential market, limited materials and the high initial capital investment costs associated with 3-D printers can be constraining factors. Regardless, 3-D printing offers the potential for ongoing technology advancement driven by printer performance, multi-material compatibility, and the capability to yield finished products.
Compounding and Injection Molding
Polymer compounding is a standard practice and is critical for producing important co-polymer blends and composite materials, says Vicari. End products range from automobile tires to glass fiber and carbon fiber composites.
Compounds can almost always be designed to match a customer's needs better and, sometimes, at a lower cost compared to uncompounded resins or non-polymer based materials. Commonly, polymers are compounded for injection-molding applications with the addition of fibers or mineral fillers for reinforcement along with a range of several plastics additives such as impact modifiers, lubricants, thermal and ultraviolet (UV) stabilizers, pigments, and other materials.
The late 1940s and 1950s marked the emergence of injection molding as commercialization of large-volume thermoplastic resins took place. The process offers several advantages, including high production rates, low labor costs, and minimal scrap losses, among others. However, expensive molds and equipment are a disadvantage as molds often cannot be readily interchanged between different plastics.
Today, injection molding trends focus on more compact, reliable, and easier-to-operate molding machines, in addition to techniques to reduce capital and operating costs, says Nexant.
Simply put, the injection molding process involves feeding plastic pellets into an extruder, where they melt, and injecting the melt into a mold cavity, where it cools and solidifies. This process occurs in cycles, with each cycle ejecting a number of plastic parts determined by the number of cavities in the mold.
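To put the cycle arithmetic in rough terms, throughput scales with the number of mold cavities and inversely with cycle time. The short sketch below is illustrative only; the cavity count and cycle time are assumed example values, not figures from any particular machine.

```python
# Hypothetical illustration of injection molding throughput.
# The cavity count and cycle time are assumed example values.

def parts_per_hour(cavities: int, cycle_time_s: float) -> float:
    """Parts ejected per hour = cavities per cycle x cycles per hour."""
    return cavities * 3600.0 / cycle_time_s

# Example: a 4-cavity mold running a 30-second cycle.
print(parts_per_hour(cavities=4, cycle_time_s=30.0))  # 480 parts per hour
```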
Lastly, the largest market for injection molded plastics is Asia, specifically China, with North America and Western Europe also presenting market opportunities. Currently, the automotive industry is driving demand because plastics weigh less than the other materials that make up a vehicle.

2016年12月7日星期三

LEDs Electrify Energy Management Efforts

The news about lighting is that there has been a revolution in solid-state lighting (SSL).
The extent of the revolution can be gleaned from Dr. Mark Rea, director of the Lighting Research Center (LRC) at Rensselaer Polytechnic Institute in New York. “There was a story line in lighting that you never displace a light source, you can only add a new one to the family,” he says. “LEDs have changed all that.”
Rea says that LEDs can compete with any lighting technology and do a better job. Because LEDs use so little power, the only cost-effective energy-saving lighting control strategy involves an occupancy sensor and a switch. Rea says that the old “ballistic” model of control that was used until a few years ago makes less economic sense. That model is based on the notion of launching an on/off or dimming message and seldom if ever looking for feedback.
“Now it’s two-way communication,” he says. “Controls are very different philosophically than they were even five years ago. In the future, the data acquired with new controls may be even more valuable than the energy those controls save.”
The minimum requirements for a two-way LED lighting control strategy are occupancy sensors, switches, dimmers, and networks.
Occupancy Sensors
The most common occupancy sensors use one of two technologies, or a combination of both.
Passive Infrared (PIR) senses the difference between the heat emitted by humans and background heat. A space is assumed to be occupied when the human heat is in motion. PIR requires a direct line of sight from the sensor to the occupant and is effective at detecting people walking into or out of a space. However, it only works for sensing major motion.
An ultrasonic sensor emits a signal that is reflected off all objects in an area. It detects motion by the Doppler effect, namely, the shift in frequency between the emitted and reflected signals if an object is in motion. Ultrasonic sensing can cover a space even if obstacles are present; it also can detect small motions. The problem is that it can be sensitive to motions caused by vibrations or air currents.
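To make the Doppler principle concrete: for a reflected signal, the frequency shift is roughly 2·v·f0/c, where v is the target speed, f0 the emitted frequency, and c the speed of sound. The sketch below uses illustrative values; the 40 kHz emitter frequency is an assumption chosen as a typical figure, not a spec for any particular sensor.

```python
# Approximate Doppler shift seen by an ultrasonic occupancy sensor.
# Values are illustrative; real sensors and rooms will differ.

SPEED_OF_SOUND = 343.0  # m/s in room-temperature air

def doppler_shift_hz(emit_freq_hz: float, target_speed_m_s: float) -> float:
    """Frequency shift of a signal reflected off a target moving toward the sensor.
    For a reflection, the shift is roughly twice the one-way Doppler shift."""
    return 2.0 * target_speed_m_s * emit_freq_hz / SPEED_OF_SOUND

# A person walking at ~1 m/s in front of an assumed 40 kHz emitter:
print(round(doppler_shift_hz(40_000.0, 1.0)))   # ~233 Hz
# A small hand motion at ~0.1 m/s still produces a measurable shift:
print(round(doppler_shift_hz(40_000.0, 0.1)))   # ~23 Hz
```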
Dual technology uses both methods in combination for more reliable operation. In order to turn lights on automatically, both sensors must detect someone entering a room. After that, only one of the two is needed to keep the lights on while the area is occupied.
All three versions are available in ceiling-mount or wall-mount configurations.
Some energy codes such as ASHRAE 90.1 and New York City’s Energy Code LL48 mandate that lights be switched on manually when the first person enters a space and switched off automatically when the space is no longer occupied. However, it is sometimes acceptable to switch the lights on automatically to 50% of the normal illumination level when a person enters, requiring a manual switch for full illumination.
Using sensors to turn off lights or dim them when a space is unoccupied can save energy. Even in offices that are generally used all day, workers often arrive and leave at different times, rendering timer control counterproductive.
Different spaces, such as open plan offices, partitioned offices, private offices, utility rooms, cafeterias, conference rooms, hallways, and lobbies have different lighting needs.
In any particular application, it is necessary to know the exact specifications of a sensor to decide how it should be mounted for coverage of the targeted space. For example, in an open office with cubicles, ceiling-mounted sensors should probably be specified and arranged so that their coverage patterns overlap. (Coverage patterns are specified by the manufacturer for each device.) In such an office, it often is a good idea to choose a longer time for automatic off, say 15 to 30 minutes.
Dimming
There are a variety of methods for LED dimming. The following basic facts should be considered in their design and selection.
• LED light output is directly proportional to current.
• The relationship between LED voltage and current is nonlinear.
• Correlated Color Temperature (CCT) of an LED is specified at a given voltage/current operating point.
• Efficacy in lumens/watt is specified at a given operating point.
• Switching times between emitting and non-emitting states are on the order of nanoseconds.
• LEDs have a significant startup inrush current — the peak to average current ratios can be as high as 30:1.
LEDs require drivers to convert AC line voltage to low voltage DC because LEDs are semiconductor diodes. An LED dimmer must be designed to be compatible with the driver and the driver with the specific LED.
When replacing an incandescent lamp with an LED, it is important to change the dimmer as well. Typical incandescent dimmers use so-called phase-cut technology, where the conduction angle of each half cycle of the input sine wave is varied to control average power. Attempting to use them with LEDs can lead to unstable performance.
Two common dimming technologies exist for LEDs: pulse-width modulation (PWM) and constant current reduction (CCR).
In PWM, the LED current is switched between zero and rated output. The ratio of on time to off time is varied to control average luminous intensity. For example, if the on time equals the off time, the intensity is 50% of maximum. The switching is done at a high enough frequency to avoid detectable flicker.
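In other words, the average output simply tracks the duty cycle. A minimal sketch of that relationship is shown below; the lumen rating and switching frequency are assumed example values, not a driver design.

```python
# Relationship between PWM duty cycle and average LED output.
# Illustrative only; real drivers add dead time, slew limits, and so on.

def average_intensity(duty_cycle: float, rated_intensity_lm: float) -> float:
    """Average output when the LED is pulsed between zero and rated current."""
    assert 0.0 <= duty_cycle <= 1.0
    return duty_cycle * rated_intensity_lm

def on_time_us(duty_cycle: float, pwm_freq_hz: float) -> float:
    """On-time per period for a given duty cycle and switching frequency."""
    period_us = 1e6 / pwm_freq_hz
    return duty_cycle * period_us

# 50% duty cycle on an assumed 1,000-lumen source -> 500 lm average.
print(average_intensity(0.5, 1000.0))
# At an assumed 1 kHz switching frequency, 50% duty is 500 microseconds
# of on-time in each 1 ms period.
print(on_time_us(0.5, 1000.0))
```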
The advantage of PWM is that the LED is always switched on to rated current so color temperature and efficacy are constant across the dimming range.
One disadvantage of PWM is that the fast rise and fall times of the high frequency on/off pulses can generate electromagnetic interference (EMI). Performance issues also may arise if there are long wire runs between the dimmer and the light source. What’s more, the power source for the LED must be rated to handle the peak turn-on transients, not just average power.
CCR is an analog technique in which the voltage is held constant and the DC current is varied to control luminous intensity. It avoids the problems due to high frequency switching, but CCT and efficacy will vary with dimming percentage.
Retrofitting Lighting Controls
The challenge of how best to retrofit existing lighting and controls is a hot topic. For simple on/off control, either from a manual switch or an occupancy sensor, there’s no problem — simply replace the lamp. Screw-base lamp replacements typically have self-contained drivers. For fluorescents, however, the ballast must be removed from the fixture and replaced with an LED driver.
To reap the benefits of LED lighting systems, a network for communication and control needs to exist. Perhaps the most basic challenge for establishing a network is how to physically interconnect the lamps and controls. This can be more difficult for retrofits than for new construction.
Dimming is a good example. Because existing incandescent and fluorescent dimmers won’t do the job, the control function must be separate from the main power source. Although different approaches exist to solving this problem, they fall into two general categories: wired and wireless.
Power over Ethernet (PoE) has many advantages among the wired systems. PoE luminaires (code-talk for lighting fixtures) typically include drivers, occupancy sensors, and dimmers.
The system uses standard Ethernet category style cable to power the LEDs and also to transmit bi-directional control and sensor signals. Cat 5 cable can deliver up to 51 watts at 37 to 57 VDC. An upgrade to the IEEE 802.3 Standard is expected to be approved in 2017, which will increase the source power to 90 watts. The most efficient LED lighting fixtures for offices draw 30 – 35 watts.
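A quick budget check shows why a single Cat 5 run can supply an office luminaire with room to spare, using the wattage figures quoted above. The cable-loss allowance in the sketch is an assumed illustrative value.

```python
# Rough PoE power budget per port, using the wattage figures quoted above.
# The cable-loss allowance is an assumed illustrative value.

PORT_BUDGET_W = 51.0           # deliverable power over Cat 5, per the article
FIXTURE_DRAW_W = 35.0          # upper end of an efficient office LED fixture
CABLE_LOSS_ALLOWANCE_W = 4.0   # assumed allowance for resistive loss in the run

headroom = PORT_BUDGET_W - FIXTURE_DRAW_W - CABLE_LOSS_ALLOWANCE_W
print(f"Headroom per port: {headroom:.0f} W")  # ~12 W left for sensors and controls
```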
The beauty of PoE is that it enables each luminaire to be configured as an Ethernet network node with a unique IP address so that a control system can interact with each luminaire individually. This opens a world of possibilities, especially in terms of the data acquired from the installed lighting.
The control and sensing information can be handled with a dedicated controller or with software that can be uploaded to a computer or mobile device. Sensor data, such as on/off/dimmed status, occupancy, and temperature can be shared via Ethernet with Building Automation Systems (BAS) and Enterprise Resource Planning (ERP) systems. It can also be shared with other enterprises via Internet, and may be sent to the cloud for processing, storage, and analytics. The system software also can be reconfigured for future upgrades or expansion as requirements expand and the technology evolves.
One issue with PoE, however, is that it requires running cable to each luminaire. The wiring does not have to be in conduit so it can be easily installed above a drop ceiling and typically does not require a licensed electrician. However, the work should be completed by someone certified for low-voltage wiring.
A wireless system is a much less costly option for retrofitting — one must invest in some hardware but not in labor-intensive wiring.
A number of wireless technologies are on the market. Some, such as Lutron’s, are proprietary, and can be used only with Lutron products. Others use standard protocols such as Z-Wave and ZigBee; a new one designed for home automation is called Thread.
One choice that might seem surprising is Bluetooth. Development of Bluetooth began in 1994, and it is thought of primarily as a short-range cable-replacement technology. That is changing, however.
The next generation of Bluetooth, commonly known as Bluetooth Smart, was introduced in 2010 and may offer advantages compared to the other wireless technologies.
For one thing, it can transfer data at 1 Mbit/s, which is roughly an order of magnitude faster than Z-Wave and ZigBee. There are advantages to the high data rate even though building automation does not require anything like that throughput.
It also enables lower latency, and thus better responsiveness, and also enables lower-duty cycle for transmitting information. That, and support for sleepy nodes, which only wake up when polled, extends battery life for sensors. Frequency hopping--the ability to automatically select the clearest among 40 different channels--is another possible advantage. A Bluetooth upgrade, to be released in the first half of 2017, is expected to double the speed and multiply the range by four.
A drawback of Bluetooth, however, is that the core specification does not support mesh networking. However, a number of companies have developed proprietary methods for using Bluetooth with mesh networks. And the Bluetooth Special Interest Group (SIG) is working on incorporating a standardized method for that.
Load Shedding
One important way that a lighting control system can help utilities balance energy supply and demand is through load shedding. Indeed, utilities have a problem: although they can plan for average demand, there are unpredictable spikes. This might occur, for example, on an unusually hot summer day. The utility must therefore provide enough capacity to handle the peak loads even if they rarely occur.
One method of satisfying a spurt in demand is with “spinning reserve,” which is the ability of a power plant already connected to the power system to increase its output. The other option is non-spinning reserve — increasing output by bringing fast-start auxiliary generators on line or importing power from other systems.
However, if the demand spike is especially high, the response can generate instability on the grid. In any event, dealing with demand spikes is expensive for utilities. So, in addition to charging commercial customers for total consumption, utilities often add a surcharge for peak demand, typically the highest load for any 15-minute period during the billing cycle.
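Because the surcharge keys on the single worst 15-minute window, facility engineers often compute it from interval meter data. The sketch below shows that calculation on synthetic 1-minute readings; the numbers are made up for illustration.

```python
# Find the peak 15-minute average demand from 1-minute kW readings.
# The readings below are synthetic; a real billing meter supplies them.

def peak_window_kw(readings_kw: list[float], window: int = 15) -> float:
    """Maximum rolling average over any `window` consecutive readings."""
    best = 0.0
    for i in range(len(readings_kw) - window + 1):
        avg = sum(readings_kw[i:i + window]) / window
        best = max(best, avg)
    return best

# A flat 400 kW base load with a brief afternoon spike to 550 kW:
demand = [400.0] * 60 + [550.0] * 20 + [400.0] * 60
print(round(peak_window_kw(demand), 1))  # the spike sets the billed peak
```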
One solution may be to reduce spikes in demand. Networked LEDs can play a role in that, especially in control systems for microgrids. The grid can send a signal to its customers that their load should be reduced, so-called load shedding.
As Rea says, "you can't dim your copier or your computer but you can dim your lighting" if it’s part of a network.
A study published in the journal Lighting Research and Technology [Lighting Res. Technol., 37, 2 (2005), pp. 133-153] investigated the effects of reduced light levels on office workers. The researchers concluded that a reduction in luminance of 15% for paper tasks or 20% for computer tasks was undetectable for half of the participants in their study.
Further, most employees would accept dimming of 30% for paper tasks and 40% for computer tasks even though these levels are detectable. Notably, if the importance of load shedding is explained to employees, the acceptable dimming limits could be increased to as much as 40% for paper tasks and 50% for computer tasks.
Since the power used by LED lamps is approximately proportional to their illumination level, a 50% dimming of lighting would result in a 50% reduction in power demand.
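Putting those numbers together gives a back-of-the-envelope estimate of how much demand a networked dimming signal could shed. The fixture count and wattage below are assumptions for illustration, not data from the study.

```python
# Estimate demand shed by dimming networked LED lighting during a peak event.
# The fixture count and wattage are assumed example values.

FIXTURES = 500
WATTS_PER_FIXTURE = 32.0
DIM_FRACTION = 0.30  # 30% dimming, within the "acceptable" range for paper tasks

lighting_load_kw = FIXTURES * WATTS_PER_FIXTURE / 1000.0
shed_kw = lighting_load_kw * DIM_FRACTION  # power is roughly proportional to output
print(f"Lighting load: {lighting_load_kw:.1f} kW, shed: {shed_kw:.1f} kW")
```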

2016年12月1日星期四

Fiber vs. Copper

Network system designers for projects that range from industrial manufacturing installations to data centers must consider a wide range of variables and anticipated demands when specifying data infrastructure highways.
Current technology provides two primary choices of design media:
• Traditional twisted-pair copper wire and cabling
• Fiber optic systems using both conventional and blown-cable designs
Industrial data centers are growing in size and complexity, and current estimates indicate 90% of active equipment will be replaced within five years. An efficient modular infrastructure, therefore, is required to adapt to fast-moving business requirements, increasing port and cable densities, and rapid deployment needs.
Infrastructure decisions made at the design stage often have a big impact on future expansions. Initial copper cable material and costs typically are 40-50% less than fiber solutions. As a result, they often are the design of choice whenever budgetary considerations are paramount. Growing demand for fiber, however, has resulted in more favorable pricing for new fiber optic system backbone installations.
Upfront costs must be weighed against long-term considerations of performance, future deployment costs to support business changes, space limitations, and modular flexibility for expanding bandwidth capabilities.
The Roads are Different
Twisted-pair copper wires are used to transmit electrical currents and drive data primarily through the proven media of Category 6 (CAT6) cabling. Costs of copper cabling per the IEEE 10 Gigabit Ethernet standard are currently projected at 40% of those of 10 Gigabit fiber networks for links of less than 100 meters. For many small- to mid-sized industrial projects, this can be an attractive approach. The copper cable industry is also pursuing CAT8 twisted-pair products to double the capacity of CAT6.
Drawbacks compared to fiber include greater power demands for high-speed data transmission via electrons rather than optical photons. Electrons moving at high speed must overcome greater resistance and require more power to process signals. In addition, more expensive and complex environmental and cooling systems are needed to manage the heat byproduct of a copper-based data center.
Fiber optic cable consists of bundles of strands of optically pure glass. These can be fabricated in a variety of configurations and transmission specifications. Typical fiber optic cables are built and rated for indoor or outdoor applications. They can be obtained in fiber ribbon counts up to 3,456 in a cable diameter less than 1.5 inches. Other common sizes include 12-, 24-, 48-, and 72-count bundles. These usually are encased in a polyethylene foam jacket.
Digital information is carried through fiber via photons in a precise pathway. As such, data can move at higher speeds and over greater distances versus copper. This is due to minimal signal loss during data transmission. As a result, fiber optic solutions often have a strong upside for designs encompassing a large geographic area.
Fiber is lighter than copper and has a smaller installation footprint. Large projects for corporate data centers or industrial feeds are typically carried out in both supported (cable trays, aerial hooks, and so on) as well as direct-feed configurations. For direct feed, specialty tubing can be used for buried or impact-resistant applications.
It’s All About the Bandwidth
From a manufacturing viewpoint, technology-driven advances have transformed the world into a grand scale “information system,” which some believe will trigger a new industrial revolution.
Physical objects now have embedded sensors and actuators that can be linked through wired and wireless networks via Internet protocols. Bandwidth demands will likely continue to grow as business and industrial enterprises move toward decentralized controls for production and supply chain logistics.
The past decade has seen strides in wireless technology for industrial networks. The reason is clear: With wireless communications, no direct cabling is required. However, growing bandwidth constraints with wireless (particularly video over wireless) have generated increasing demands for high-speed data highways.
In bandwidth performance, optical fiber has demonstrated a convincing upside versus copper cable due to the extremely high frequency ranges the optical glass is able to carry. With copper wire, signal strength diminishes at higher frequencies. CAT6 twisted-pair copper can be optimized for a bandwidth of 500 MHz over a distance of 100 meters. In comparison, multi-mode optical fiber would have a bandwidth in excess of 1000 MHz over the same distance.
In terms of Ethernet infrastructure standards, copper is a compatible choice up to 1.0 Gigabit applications. Optical fiber, however, has demonstrated transfer rates of 10 gigabits/second (Gbps). Movement toward 40-100 Gbps is a next threshold standard. A copper solution at those higher speeds is not a commercial reality at this time, making such applications suitable only for fiber.
A Case for Blown Optical Fiber?
Blown fiber technology originated with British Telecom in 1982 and provided a method to upgrade capacity or to change fiber type without prohibitive infrastructure costs.
In a ground-up blown fiber installation, “highways” of tube cable are installed in buried, riser, or plenum rated tubing constructions. The tube cables can then be populated with single- or multi-mode fibers in various configurations up to 48 fiber optic tubes per bundle. Precise geometric patterns manufactured into the fiber bundle jackets provide aerodynamic properties to the bundles. They then can be “blown” into the pathways via a customized nitrogen-driven air motor system.
Manual installations that previously required large crews and disruptions to existing operations can now be accomplished by a single pair of skilled workers trained in blowing techniques. Installation lengths of 4,000-6,000 feet are typical, with end-to-end completion times averaging 6-8 man-hours.
Blown optical fiber can be cost-effective in next-generation backbone upgrades within a given facility or in expansions to new locations. It can be blown in at a rate of 200 feet/minute. It also may be blown out at the same rate for additional or upgraded capacity.
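A quick check on those figures shows that the blowing itself is a small fraction of the job; most of the 6-8 man-hours goes to setup, splicing, and testing. The run length in the sketch is an assumed example.

```python
# Sanity check on blown fiber installation time from the rates quoted above.
# The 5,000 ft run length is an assumed example.

BLOW_RATE_FT_PER_MIN = 200.0
RUN_LENGTH_FT = 5_000.0

blow_time_min = RUN_LENGTH_FT / BLOW_RATE_FT_PER_MIN
print(f"Blowing time: {blow_time_min:.0f} minutes")  # ~25 min of a 6-8 man-hour job
```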
Cost and complexity of future expansions are minimized without the labor-intensive steps of laying new conduit and disrupting operational areas. For the visionary designer, the up-front costs of installing a high-speed blown fiber network can more than pay for themselves by lowering the cost of future upgrades.
Copper or Fiber Infrastructure?
Arriving at the “perfect” solution for a cabling infrastructure is a difficult task that requires careful analysis of a large number of variables. These can include bandwidth demands, immunity to electrical interference, density of the connectivity space, flexibility/speed of reconfiguration, cost of electronics, device interface considerations, and budgetary constraints.
Copper’s initial low investment cost often makes it a viable alternative for compact and single-building data center needs where data transfer rates of less than 2 Gbps will meet current and future needs. Copper can be considered for relatively short data transmission distances within a defined building footprint where environmental challenges, temperature fluctuations, and electromagnetic interference have a minimum impact on signal integrity.
The initial higher costs of fiber-optic solutions can be offset by their much greater bandwidth and virtually error-free transmission over large distances. Optical data center solutions generally provide simple installations that are up to 75% faster than pulled copper cables. Fiber can also deliver considerable performance, time, and cost savings both for initial and for future infrastructure needs.

2016年11月23日星期三

A Twist for Surgical Robotics

Since 2000, California-based Intuitive Surgical has dominated the world of robot-assisted surgery. Its da Vinci system has logged well over two million surgical procedures.
Now, however, a Massachusetts startup called Medrobotics has launched a system of its own that features a design that could make robots a much more common fixture in operating rooms.
Rather than being limited to straight, “line-of-sight” procedures, as with other surgical robots, the company’s Flex Robotic System enables surgeons, with full HD visualization, to guide the device through the twists and turns of the body’s pathways. The result: minimally invasive surgery can be performed on sites that were difficult or even impossible to reach before.

All the key components of the Flex Robotic System can be set up for surgery in about 10 minutes, say surgeons. The disposable Flex Drive (blue component) attaches to the Flex Base and is inserted into the patient.
“With this system, we can now reach areas, such as the top of the voice box, which were very hard to access,” says Eugene Myers, M.D., a head and neck surgeon and professor at the University of Pittsburgh Medical School. “For example, this can mean the difference between a person losing his voice or retaining it.”
So far, the Flex Robotic System has received both U.S. Food and Drug Administration approval and Europe’s CE mark for transoral procedures of the mouth and throat. In October 2016, the system also gained European clearance for colorectal surgeries, expanding the range of operations. Costing about $1 million--around half the price of the latest da Vinci model--the system offers a relatively small footprint and a reported 10-minute setup time in operating rooms.
The device is gaining recognition, including a Best of Show award in the 2016 Medical Design Excellence Awards, and a 2016 Red Herring Top 100 Award, which recognizes technologies most likely to change people’s lives. Meanwhile, the technology has attracted about $150 million in investment for Medrobotics.
“The global market for robotics and other tools for minimally invasive surgery is expected to double by 2020,” says James Jordan, chief investment officer for Pittsburgh Life Sciences Greenhouse, an early investor in Medrobotics. “But that type of increase is not likely to occur without flexible robotics technology.”
Roots at Carnegie Mellon
The Medrobotics story began at Carnegie Mellon University’s Robotics Institute, which has spawned robotics innovations ranging from devices to inspect power plants and explore space environments to autonomous tractors and self-driving cars.
In the late 1990s, Robotics Institute engineering professor Howie Choset became fascinated with what has become known as “snake robots,” highly-articulated mechanisms with many degrees of freedom. These robotic snakes can thread their way through tightly packed areas in such applications as search and rescue missions and the manufacture of giant aircraft wings.
Along with fellow CMU engineering professor Alon Wolf and heart surgeon Marco Zenati, M.D., Choset developed the first prototype of a snake robot for minimally invasive surgery.
Together, the three men founded Medrobotics in 2005. They initially targeted the device for heart surgery, and it was used successfully in a limited investigational trial on three patients in Europe in 2010. It became apparent, however, that the high cost of developing a commercial device for cardiac applications, as well as the regulatory hurdles, dictated moving to other surgical procedures for faster commercial launch.
Noting that commercializing a surgical device is “not something that a robotics professor can accomplish with grants to the university,” Choset knew it was time to hand off his invention to Medrobotics. The company not only had to raise an enormous amount of capital for development and manufacturing infrastructure, but its engineers had to refine Choset’s design, both to pass muster with regulators and to meet the clinical demands of surgeons.
“The genius of Professor Choset’s design was his revolutionary motion system, specifically the relationship of the inner and outer mechanisms of the robot that allows the articulation to occur, just like a snake,” says Samuel Straface, a Ph.D. and former scientist who serves as president and CEO of Medrobotics. “Our engineers had to transfer that know-how into a viable product for clinicians.”
Follow the Leader
The Flex Robotics System that received FDA clearance in July 2015 combines the benefits of a laparoscope -- a rigid, straight-viewing device used in minimally-invasive surgery -- with an endoscope, a flexible instrument for visual diagnosis.
Much of the device’s intellectual property revolves around the disposable Flex Drive, which snaps into a reusable base that contains system motors and controls. For transoral applications, the portion of the 1.5-lb drive inserted into the patient and containing the camera measures about 17 cm long, with a diameter that ranges from 17.5 mm to a maximum of 28 mm at the “distal end” where the surgery takes place.
The drive contains an external cable for electrical connections to the distally mounted, detachable camera and LEDs, plus an internal lumen that serves as a channel for tubing that carries fluid to a lens washer for the camera. The Flex Drive also contains accessory channels on either side of the camera to accommodate miniature instruments that the surgeon inserts during an operation.
At the heart of the Flex Drive’s technology is its motion system, which features a series of articulated segments or links. The design includes both inner and outer articulated segments, made of medical-grade polymer and arranged in a concentric mechanical assembly.
The control scheme causes each of these two assemblies to become semi-rigid or flexible by adjusting the tension on cables running through the segments. The outer assembly can also be articulated by varying the tension on its cables when part of it moves beyond the distal end of the inner assembly.
Using this ‘‘follow-the-leader’’ strategy and alternating flexible and rigid states, the drive moves to the surgical site under the direct control of the surgeon. Once positioned, the device can become rigid, forming a stable surgical platform.
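The following toy sketch illustrates the alternating-stiffness, follow-the-leader advance described above. It is illustrative pseudologic only; the names, states, and step size are assumptions and do not represent Medrobotics' actual control software.

```python
# Toy illustration of a "follow-the-leader" advance with two concentric
# assemblies that alternate between rigid and flexible states.
# Names, states, and step sizes are assumptions for illustration only.

def advance(steps: int, step_mm: float = 10.0) -> float:
    """Advance the mechanism by alternately anchoring one assembly (rigid)
    while the other, made flexible, slides forward along the stored path."""
    position_mm = 0.0
    for i in range(steps):
        if i % 2 == 0:
            # Outer assembly rigid (holds the path); inner flexes and advances.
            mover, anchor = "inner", "outer"
        else:
            # Inner assembly rigid; outer flexes and follows the same path.
            mover, anchor = "outer", "inner"
        position_mm += step_mm
        print(f"step {i + 1}: {anchor} rigid, {mover} advances to {position_mm:.0f} mm")
    return position_mm

advance(steps=4)  # each assembly in turn retraces the path set by the other
```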
A key distinguishing feature of this flexible robot is that it offers multiple degrees of freedom distributed across its length. When compared to the typical human arm’s 7 degrees of freedom, the device’s more than 30 degrees of freedom may enhance the physician’s ability to access, visualize, and perform surgeries.
And unlike other robotic systems, where the surgeon sits at a console several feet away from the operating table, the physician sits or stands next to the patient and uses a joy-stick-like controller on an adjacent console to move the inserted scope to the surgical site. A touchscreen monitor shows system operation, and a video display provides real-time, 3D visualization of the surgery.
Once the robot reaches the surgical site, the surgeon manipulates two other controllers, one in each hand, that control tiny instruments inserted through the Flex Drive’s two accessory channels. For example, the instrument controlled by the left hand might grasp a piece of tissue, while the instrument controlled by the right excises tissue for a biopsy. What’s more, the surgeon gets haptic feedback throughout the procedure.
Meeting the Engineering Challenge
Bringing the Flex Robotic System to its current state of the art has required a strategy of agile engineering, notes R&D director Russell Singleton, a Ph.D. electrical engineer whose career has included many years with startups. “The approach is to get the design out as fast as you can, then do more iterations as you go along, based on the needs of the application.”
The team also features a blend of engineering disciplines. “There is no mechanical engineering department, or electrical engineering department,” notes Singleton. “We are organized by project or program, and we encourage the cross-fertilization of ideas.”
Singleton says he prefers to buy as many components as he can off the shelf. However, the unique demands of this flexible system have dictated several homegrown designs. Since the second half of 2014, when the Flex Robotic System debuted in Europe for transoral applications, company engineers have made significant changes in the design of key components.
The “chip-on-tip” camera, which snaps onto the distal end of the Flex Drive, has evolved from a disposable, 800 x 800 pixel 2-D version to the current dual 1920 x 1080-pixel, high-definition 3D model that can be detached, sterilized and reused.
With a goal of creating a much more mobile and nimble platform, engineers have also reduced the size and weight of the Flex Drive to one tenth of what it was in 2014. In addition, based on the input of surgeons who work closely with the company, engineers have created a family of miniature surgical instruments that are disposable and measure just 3.5 mm in outer diameter. Some specialized instruments for operating on the voice box have diameters as small as 1.5 mm. Moreover, engineers have had to design these instruments to be flexible, since they too must navigate through channels that bend and twist to fit the body’s anatomy.
“Medrobotics’ engineers are not only dedicated and talented, but they are used to fast turnaround,” notes Marshall Strome, M.D., a professor and chairman emeritus of the Cleveland Clinic Head and Neck Institute. “I recall cases where I asked engineers in the morning to modify an instrument, and they had a new design to show me by the end of the day.”
The engineers also must tackle the tough task of developing motion control and path planning algorithms, a particularly challenging job in snake robot systems with their multiple degrees of freedom. Singleton cites MATLAB as a useful software package to model and simulate motion and path planning. Among other key tools: SOLIDWORKS for 3D CAD modeling of components and Altium for printed circuit board design.
The distal end of the Flex Drive includes the camera, LEDs, and accessory channels through which the surgeon inserts custom-designed instruments.
Ramping Up for Production
On the manufacturing side, the Flex Robotics System must run a gauntlet of tests needed for regulatory approval. The battery of tests described in the company’s 510(k) premarket submission to the FDA in 2015 includes electrical safety testing, biocompatibility, sterilization and packaging, animal studies, and human factors testing, among other things. As production ramps up, clean rooms, both at Raynham and offsite, will be used to assemble the camera, as well as disposable components, such as the Flex Drive and surgical instruments.
Michael Gallagher, Medrobotics’ vice president of Operations, says the Flex Robotic System requires about 3,000 components, which presents challenges in materials and resource planning. Continuing design changes trigger additional adjustments in manufacturing.
For example, the move to 3-D vision requires changes in the Flex Base, such as replacing multiple circuit boards and machining of the frame. Meanwhile, field upgrades to systems installed in hospitals have to be managed. “It’s a very fast-paced environment, like being strapped to a rocket, and you have to be agile,” says Gallagher.
To manage such tasks, Gallagher cites QAD software as an important tool in materials and resource planning. In 2017, the company plans to introduce Omnify product lifecycle management software, a package that includes quality assurance features. On the test side, Gallagher points to software packages like LabVIEW TestStand and Euresys, both used in important tests of the Flex Drive’s camera lens.
Throughout the system’s development, Medrobotics has embedded manufacturing engineers and supply chain managers alongside design engineers on project teams, says Gallagher, both to optimize components for production and test, as well as to ensure better resource planning. For example, engineers configured complex boards on camera assemblies for more efficient testing on bed-of-nails fixtures. To reduce costs and simplify production, the engineering team completely redesigned the Flex Drive, replacing custom-machined components with injection molded parts.
Growth Spurt
As Medrobotics positions its system for new surgical applications, engineers will need to make further design and manufacturing changes, including the size and shape of the Flex Drive, new types of surgical instruments, the configuration of special retractors and fixtures through which the drive enters a patient, and motion control and path planning.
Still, the company is confident that it will play a major role in the growth of surgical robots in the years ahead. Nearly every day, surgeons can be found at the Raynham facility, where four complete systems have been set up to acquaint medical personnel with the technology. Surgeons, such as Dr. Umamaheswar Duvvuri of the University of Pittsburgh School of Medicine and Dr. David Goldenberg of the Penn State College of Medicine, are demonstrating the system to interested surgeons in surgical settings.
In addition, surgeons in Europe are helping to prove the technology’s viability. In July 2016, Medrobotics released the results of clinical trials involving 80 patients with throat lesions at four hospitals in Germany and Belgium. Using the Flex Robotic System, surgeons accessed the target area in 94% of the cases. No device-related adverse events were reported.
Surgeons familiar with the system point to such benefits as being able to see behind anatomical structures, such as organs, to detect hidden tumors. They add that flexible robotics also helps them perform more precise surgery, which can mean reduced need for chemo and radiation therapies in cancer patients.
Encouraged by the response from the medical community, CEO Straface expects to see several hundred hospitals worldwide using the Flex Robotic system within the next three years. Beyond transoral and colorectal applications, he sees the device being used in an expanding array of procedures, such as gynecological surgery. Eventually, he believes surgeons will choose flexible robotics for operations on the beating heart.
With an intellectual property portfolio that includes 200 patents issued or pending, Straface says his company has achieved “hard won” leadership in the field of flexible robotics for medical applications.
“Analysts are predicting a mammoth rise in the adoption of robots for surgery,” says Straface. But for that to happen, he says that a paradigm shift must occur so that surgeons “are no longer limited to line-of-sight robotic technology.”

2016年11月15日星期二

Engineer's Guide to Corrosion: Part 3

Steam generator materials range from mild steel in condensate/feedwater systems and waterwall tubes to high-alloy steels in superheaters/reheaters and turbines. Alloying elements obviously have a strong impact on the corrosion mechanisms that can affect the various steels. In Part 3 we examine corrosion issues in these systems and include a discussion of methods of control. Corrosion is quite a complex science, with steam generation corrosion being just one part of a huge mosaic.
Mild Carbon Steel Corrosion and Control - Feedwater
Iron is an amphoteric metal, meaning that both low and very high pH solutions will cause corrosion. Part 1 of this series outlined the basic corrosion mechanism in acid. At the temperatures common to the condensate/feedwater system and steam generator, general corrosion is minimized at a mildly basic pH.
For typical heat recovery steam generators (HRSGs) at combined-cycle plants, the recommended feedwater pH range is 9.6 to 10 (as measured at 25°C). The most common chemical used for feedwater pH control is ammonia (NH3), a weak base that generates hydroxyl ion (OH-).
Because feedwater is pure (or always should be), a common method to control ammonia feed is from specific conductivity (S.C.) readings, as in pure water a direct correlation exists between S.C. and pH, with S.C. being much easier to measure.
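The correlation can be illustrated with a simple equilibrium calculation. The sketch below uses textbook 25°C constants (an ammonia base-dissociation constant of about 1.8e-5 and limiting molar conductivities for NH4+ and OH-) and ignores CO2 and other impurities, so it is an approximation for illustration rather than a plant-grade algorithm.

```python
# Approximate pH and specific conductivity of pure water dosed with ammonia.
# Textbook 25 C constants; CO2 and other impurities are ignored (an assumption).
import math

KB_NH3 = 1.8e-5          # base dissociation constant of ammonia at 25 C
LAMBDA_NH4 = 73.5        # limiting molar conductivity of NH4+, S*cm^2/mol
LAMBDA_OH = 198.0        # limiting molar conductivity of OH-, S*cm^2/mol
NH3_MOLAR_MASS = 17.03   # g/mol

def ammonia_ph_and_sc(nh3_ppm: float) -> tuple[float, float]:
    c = nh3_ppm / 1000.0 / NH3_MOLAR_MASS                # total ammonia, mol/L
    # Solve x^2 + Kb*x - Kb*c = 0 for x = [OH-] = [NH4+]
    x = (-KB_NH3 + math.sqrt(KB_NH3**2 + 4.0 * KB_NH3 * c)) / 2.0
    ph = 14.0 + math.log10(x)
    sc_us_cm = (LAMBDA_NH4 + LAMBDA_OH) * x / 1000.0 * 1e6  # microsiemens/cm
    return ph, sc_us_cm

for ppm in (0.5, 1.0, 2.0):
    ph, sc = ammonia_ph_and_sc(ppm)
    print(f"{ppm:.1f} ppm NH3 -> pH {ph:.2f}, ~{sc:.1f} uS/cm")
```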
Beyond pH, however, another issue has arisen regarding flow accelerated corrosion (FAC). A future article will cover that topic, but it is worth mentioning here that in the condensate/feedwater systems of modern HRSGs (virtually none of which contain copper alloys) a small amount of dissolved oxygen (5 to 10 ppb in the feedwater) is required to generate the proper protective oxide surface on carbon steel. As we will review in that future article, FAC is an extremely serious issue.
From a materials aspect regarding FAC control, fabrication of FAC-susceptible components with P11/T11 or P22/T22 (1¼% and 2¼% chromium, respectively) should be a standard consideration for all new HRSGs.
Yet another important subject, which requires a separate article for full discussion, is steam generator corrosion control when the unit is off-line. In these cases, oxygen is an enemy, and air in-leakage to cold units standing full of water can cause damaging corrosion.
A technology for corrosion protection that has been known for decades and is now re-emerging is filming amine use. Amines are organic compounds with one or more nitrogen-bearing amine groups attached to an organic chain. (Some small-chain amines, known as neutralizing amines, are used in place of or along with ammonia as pH-conditioning agents.)
The amine groups and perhaps other functional groups on the chain attach to metals, while the hydrophilic organic chain establishes a barrier to water. In recent years, some positive results have been seen in full-scale tests of these products. Potential drawbacks still remain, including organic breakdown in high-temperature steam. Organizations such as the Electric Power Research Institute (EPRI) continue research into these products.
Boiler Water Corrosion Control
The ammonia (or perhaps neutralizing amine) used for condensate/feedwater conditioning can provide enough alkalinity to also protect the steam generator proper, as long as impurity intrusion is prevented. But condenser tube leaks will introduce a variety of contaminants to the condensate and the steam generator, including scale-forming ions and also chloride and sulfate, among others. The latter, and chloride in particular, can generate acids that quickly consume ammonia and leave the steam generator in a vulnerable condition.
So, basically from the advent of high-pressure power production, most steam generators (and here we are only examining drum-type boilers and HRSGs) have been treated with an additional pH-conditioning agent. Most common has been tri-sodium phosphate (TSP, Na3PO4), which generates alkalinity as follows:

Na3PO4 + H2O ⇌ Na2HPO4 + NaOH
Phosphate will also react with hardness ions to prevent them from forming hard scale in the steam generator, but unfortunately, TSP has some drawbacks that limit its effectiveness. One of the most notable is its reverse solubility at high temperatures.
Tri-sodium phosphate solubility vs. temperature.
The compound cannot be dosed liberally, as most of it precipitates directly on boiler internals. Compounding this issue is that the TSP can react directly with the boiler tube material to form iron-sodium phosphate compounds. For these reasons, chemists at some plants have adopted straight caustic (NaOH) feed as the method to establish boiler water alkalinity. However, due to iron’s amphoteric nature, the caustic concentration must be limited to a maximum of 1 part-per-million (ppm).
At this point, another factor must be considered. Even in systems with good feedwater chemistry, iron oxide corrosion products still are generated and travel to the steam generator. At the high temperatures in the boiler, the particulates precipitate, generally on the hot side of the tubes. The result is a porous deposit layer that water can penetrate through various channels.
An illustration of wick boiling.
As the water approaches the tube surface, temperatures increase. The water boils off, leaving impurities behind (wick boiling). The contaminants can concentrate to very high levels. One of the most frightening reactions possible is the hydrolysis of concentrated magnesium chloride:

MgCl2 + 2H2O → Mg(OH)2 + 2HCl
As can be seen, a product of this reaction is hydrochloric acid. While HCl may cause corrosion in and of itself, the compound will concentrate under deposits, where the reaction of the acid with iron generates hydrogen, which in turn can lead to hydrogen damage of the tubes.
Under-deposit acid formation. Illustration courtesy of Ray Post, ChemTreat.
In this mechanism, atomic hydrogen penetrates into the metal where it reacts with carbon atoms in the steel to generate methane (CH4). Formation of the gaseous methane and hydrogen molecules causes cracking in the steel, greatly compromising its strength. Hydrogen damage is troublesome because it cannot be easily detected. After hydrogen damage has occurred, plant staff may replace tubes only to find that other tubes continue to rupture.
A tube failure due to hydrogen damage. Notice the thick-lipped fracture, indicative of failure with little metal loss.
For many decades, hydrogen damage has been at or near the top of the list of high-pressure steam generator corrosion mechanisms. What power plant personnel often fail to recognize is that even small condenser tube leaks (or other impurity ingress), if they are chronic, can allow repetitive under-deposit corrosion and hydrogen damage.
Under-deposit corrosion is not limited to low pH conditions. Per the wick boiling example outlined above, another compound that can concentrate within deposits is caustic. Concentrations may rise to levels many times that in the bulk boiler water. The concentrated NaOH attacks the boiler metal and its protective magnetite (Fe3O4) layer via the following reactions:

Fe + 2NaOH → Na2FeO2 + H2

Fe3O4 + 4NaOH → 2NaFeO2 + Na2FeO2 + 2H2O
The potential for under-deposit caustic gouging is the primary reason why the free caustic concentration in the boiler water is typically limited to 1 ppm.
Key Takeaways
The reader will have observed a number of important items in this discussion, but several takeaways should be emphasized.
• The amphoteric nature of iron requires pH control within a rather narrow window.
• Deposits that accumulate within the steam generator greatly influence the corrosion potential due to their ability to allow concentration of impurities. That is why regular chemical cleaning is important, although this can be a difficult task in HRSGs with the complex evaporator tubing.
• Proper feedwater chemistry control is required not only to prevent FAC, but also to minimize carryover of particulates to the steam generator.
• Units should not be operated with condenser tube leaks unless the condensate system is equipped with a condensate polisher.
A Note on Steam Chemistry
Space did not permit a discussion about steam chemistry, but it is highly important. In fact, many of the guidelines for boiler water chemistry have developed around the prevention of impurity carryover to the steam system and turbine. The corrosion mechanisms that certain impurities can induce include pitting, stress corrosion cracking, and corrosion fatigue.
References
1. Comprehensive Cycle Chemistry Guidelines for Combined Cycle/Heat Recovery Steam Generators (HRSGs). EPRI, Palo Alto, CA: 2013. 3002001381.
2. Buecker, B., and S. Shulder, “Critical Water/Steam Chemistry Concepts for HRSGs”; webinar for Energy-Tech University, Oct. 25-26, 2016.
3. B. Buecker, “Condenser Chemistry and Performance Monitoring: A Critical Necessity for Reliable Steam Plant Operation”; paper IWC 99-10 from the Proceedings of the 60th Annual International Water Conference, October 18-20, 1999, Pittsburgh, Penn.

2016年11月3日星期四

Engineer's Guide to Corrosion: Part 2

Part 1 of this series provided an overview of a number of the most common corrosion mechanisms that can occur in steam generators at power and industrial plants. Here, in Part 2, we focus on the most common materials in these systems because an understanding of material constituents is needed for subsequent corrosion study.
The universal material for condensate/feedwater piping and boiler waterwall tubes is mild carbon steel, as this material often offers the best combination of strength and price. A transition from the mild steels to higher-strength alloys, including the ferritic and austenitic steels, is needed in superheaters and reheaters where temperatures are higher and the density of the cooling medium is much lower. The table outlines the chemical composition of the most common materials employed for steam generator fabrication.
Common steam generator materials. Adapted from Ref. 1
In considering materials for condenser and feedwater heater tubes, current practice calls for all-ferrous alloys, usually stainless steel. Some older power-generating units still have heaters and/or condensers with copper alloy tubes. Most common are Admiralty metal (roughly 70% copper, 29% zinc, and 1% tin, with trace arsenic or phosphorus added to inhibit dezincification) and the copper-nickel alloys, notably 90-10 and 70-30.
Turbine rotors and blades are composed of high alloy steels. Typical for turbine blades is 403 SS, which contains 12% to 13% chromium. A popular material for modern supercritical plant turbine blades is type 422, 12 Cr ferritic steel. HP turbine rotors are often ferritic steels containing 1% to 13% chromium and some nickel, molybdenum, and vanadium.
Alloying Elements in Steel
At first glance, the numerous elements that make up the steels outlined in the table may be bewildering. Most are alloying elements to improve various properties (toughness, resistance to heat, and so on) of the steel, while a few are impurities whose content must be carefully controlled.
Carbon: Carbon is the alloying element that defines steel, as opposed to plain iron. As stated in Reference 2, it is the “most important alloying element in steel.” Carbon increases the tensile and yield strengths of steel, although it also lowers the ductility and toughness. As is evident in the table, a relatively small percentage of carbon is necessary, and indeed will dissolve in steel, for alloying purposes. Above about 2% concentration, the element forms carbon nodules in the metal; these are the cast irons, whose ductility and strength decrease with increasing carbon content.
Chromium: Chromium is the element that, in 12% concentration or greater, imparts the “stainless” quality to stainless steels. The chromium causes steel to form a layer of chromium oxide that protects the steel from its environment (with some cautions to that chemistry, as will be outlined later). However, even at concentrations below 12%, chromium provides benefits, including greater strength and toughness and improved resistance to creep and oxidation at high temperatures. (Creep is the deformation of steel at elevated temperatures.) Also, even small amounts of chromium provide increased resistance of steel to flow-accelerated corrosion (FAC). [3] Both single-phase and two-phase FAC have plagued many steam generating systems, and some failures have caused fatalities.
Nickel: Nickel improves toughness and in certain cases corrosion resistance of steel. But, it is most important when alloyed at 8% concentration or higher, as at this concentration and above, nickel causes the steel to assume the austenitic structure.
A brief discussion of the two most common steel crystalline structures will aid in understanding. When metals are fabricated from molten material, the crystalline structures that form grow into grains as the material solidifies.
Illustration of metal grains.
The body-centered cubic structure. Note the atom centered within the cell.
The face-centered cubic structure.
Grain size may vary depending on the metal or alloy and on fabrication methods (for example, fast cooling, slow cooling, or subsequent heat treatment). We will examine grain behavior later with regard to some special corrosion issues; grain boundaries, for example, often serve as corrosion sites.
The crystalline structure of the metal within grains is also important. For mild steels, and even those that contain significant chromium but no nickel, each unit cell of the crystal has a body-centered cubic structure, as shown.
These steels are defined as ferritic. If the steel is heated to a high temperature, this structure transforms into a face-centered cubic structure known as austenite.
In some cases, austenite offers better mechanical properties than ferrite. The addition of 8% or more nickel as an alloying element allows the steel to remain austenitic at room temperature and below.
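As a rough, purely illustrative comparison (the figures below follow from standard hard-sphere geometry and are not taken from this article), the two structures can be distinguished by how densely they pack atoms; a short Python sketch:

import math

# Hard-sphere geometry of the two common unit cells (illustrative only).
# BCC (ferrite): 2 atoms per cell; atoms touch along the body diagonal, so 4r = a*sqrt(3).
# FCC (austenite): 4 atoms per cell; atoms touch along the face diagonal, so 4r = a*sqrt(2).

def packing_factor(atoms_per_cell, radius_for_unit_edge):
    # Fraction of a cubic cell of edge length 1 occupied by the atoms.
    sphere_volume = (4.0 / 3.0) * math.pi * radius_for_unit_edge ** 3
    return atoms_per_cell * sphere_volume

print(f"BCC (ferrite):   {packing_factor(2, math.sqrt(3) / 4):.2f}")  # about 0.68
print(f"FCC (austenite): {packing_factor(4, math.sqrt(2) / 4):.2f}")  # about 0.74

The denser face-centered packing is one reason austenitic steels behave differently from ferritic steels at the atomic level.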
The table may now be more understandable with regard to the crystal structure of each of the steels. A third structure, martensite, can also be produced by special treatment, but it will not be considered here. Note, however, that a relatively new class of steels, the duplex alloys, is being used in some applications. These consist of a combination of ferrite and austenite. The duplex alloys offer some advantages, but at least one spectacular failure mechanism, in an air pollution control application, is known; it will be outlined in a later article.
Molybdenum: Molybdenum increases steel's strength and resistance to wear, among other benefits, and also contributes to high-temperature strength.
Others: Silicon helps protect steels against oxidation, while manganese counteracts problems related to residual sulfur in the steel. Phosphorus in low concentrations can increase steel hardness but becomes troublesome at concentrations higher than those shown in the table. Other trace elements that may be beneficial, depending upon the alloy, include aluminum, boron, copper, niobium (also known as columbium), nitrogen, and tungsten.
Some Mechanical Steel Failure Mechanisms
The components of a high-pressure steam generator are subject to a number of mechanical stresses, including tensile and compressive stresses, among others. These often occur at attachments such as supporting infrastructure and tubing connections at headers, drums, and similar locations. Stressed locations are strongly influenced by the high temperatures of normal operation and by cyclic stresses when units reduce load or come off-line, which has become much more common in recent years.
Several of these stresses will come into focus in later corrosion discussions. For example, some components that cycle frequently may fail from simple fatigue; tube attachments are classic locations for fatigue. Corrosion fatigue, as the name implies, involves structural weakening by fatigue that allows corrosive agents to accumulate in the weakened area, usually a small crack, and then increase the corrosion rate. Stress corrosion cracking (SCC) can occur in components such as turbine blades, where normal operation induces stresses that allow corrosion to take hold.
Another common phenomenon in steam generators is the deformation known as creep. Materials begin to lose strength at elevated temperatures, the ultimate example being a metal taken to its melting point. But even at temperatures well below the melting point, the combination of heat and stress will cause metals to deform.
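For readers who want a feel for how strongly temperature shortens creep life, one widely used engineering correlation is the Larson-Miller parameter. The sketch below is not taken from this article or its references, and the numbers are assumed purely for illustration:

# Larson-Miller parameter: LMP = T * (C + log10(t_r))
#   T   = absolute temperature (kelvin)
#   t_r = time to creep rupture (hours)
#   C   = material constant, often taken as about 20 for many steels (assumed here)

def rupture_time_hours(lmp, temperature_k, c=20.0):
    # Solve the Larson-Miller relation for time to rupture at a given temperature.
    return 10 ** (lmp / temperature_k - c)

lmp_example = 20000  # assumed value; real LMP data come from stress-rupture tests
for temp_k in (800, 850, 900):
    print(f"{temp_k} K -> roughly {rupture_time_hours(lmp_example, temp_k):,.0f} hours to rupture")

With these assumed numbers, a 100 K rise in metal temperature cuts the estimated rupture life from about 100,000 hours to under 200 hours, which is why overheated tubes can fail so quickly.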
Very common in high-pressure steam units, and especially in superheaters and reheaters, is long-term creep. Over time, the material may deform or elongate to the point of failure. Corrosion may often play a part in long-term creep, particularly if corrosion products inhibit heat transfer. At the opposite extreme, if corrosion or some other mechanism causes a partial or full blockage of fluid flow in high-temperature water- or steam-bearing tubes, short-term overheat may be the result.
Rapid tube failure. Note the thin-lipped structure at the failure.
Such dramatic failures may be corrosion/chemistry related, but also may be induced by operational upsets. In coal-fired power plants, firing a unit too rapidly is one possibility. Another, which has occurred over the years, is neglecting drum level during operation and allowing portions of waterwall tubes to go dry.
In Part 3, we will examine primary corrosion mechanisms and their control in greater detail. These issues are too often overlooked, sometimes to the point of a catastrophic failure.
References
1. C. Bozzuto, Ed., Clean Combustion Technologies, Fifth Edition; Alstom, 2009, Windsor, CT.
2. J.B. Kitto and S.C. Stultz, Eds., Steam: Its Generation and Use, 41st Edition; Babcock & Wilcox, 2005, Barberton, OH.
3. B. Buecker, “Flow Accelerated Corrosion – A Critical but Often Not Well Recognized Steam Generator Issue”; Engineering 360, August 2016.

Wednesday, October 26, 2016

Wirelessly Powered Drone Technology Demonstrated

 Scientists have demonstrated an efficient method for wirelessly transferring power to a drone while it is flying. The technology could in theory allow flying drones to stay airborne indefinitely simply by hovering over a ground support vehicle to recharge—potentially opening up new industrial applications.
The technology uses inductive coupling, a concept first demonstrated by the inventor Nikola Tesla more than a century ago. Two copper coils are electronically tuned to one another so that power can be exchanged wirelessly at a particular frequency.
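In practical terms, "tuned" means the transmitter and receiver circuits share the same resonant frequency, set by each coil's inductance and its tuning capacitor. A minimal Python sketch with assumed component values (the article does not report the actual circuit parameters):

import math

# Resonant frequency of an LC circuit: f = 1 / (2 * pi * sqrt(L * C))
def resonant_frequency_hz(inductance_h, capacitance_f):
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

coil_inductance = 10e-6       # 10 microhenries (assumed)
tuning_capacitance = 100e-12  # 100 picofarads (assumed)
print(f"Resonant frequency: {resonant_frequency_hz(coil_inductance, tuning_capacitance) / 1e6:.1f} MHz")

Power transfers efficiently only when both coils resonate at the same frequency.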
To demonstrate the technology, engineers from Imperial College London removed the battery from an off-the-shelf mini-drone and demonstrated that they could wirelessly transfer power to it to keep it aloft. They believe their demonstration is the first to show how wireless charging via inductive coupling can be carried out efficiently with a flying object such as a drone.
To achieve the feat, the researchers removed the battery from the 12-cm-diameter quadcopter, altered its electronics, and fitted a copper-foil ring around the drone's casing to act as a receiving antenna. On the ground, a transmitter device built from a circuit board was connected to the electronics and a power source, creating a magnetic field.
The drone's electronics were tuned to the frequency of the magnetic field. When the drone flew into the field, an alternating-current voltage was induced in the receiving antenna, and the drone's electronics converted it efficiently into the direct-current voltage needed to power the craft.
The technology is still in its experimental stage, the researchers say, as the drone can currently fly only 10 cm above the magnetic field transmission source. The team estimates they are one year away from a commercially feasible product.
The use of small drones for commercial purposes—for surveillance, reconnaissance missions and search-and-rescue operations—is growing rapidly. However, the distance that a drone can travel and the duration it can stay in the air are limited by the availability of power and recharging requirements. Wireless power-transfer technology may solve this, the team says.
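A back-of-the-envelope calculation shows why battery endurance is such a constraint; all of the figures below are assumed for illustration, not reported by the researchers:

# Rough hover-endurance estimate for a small multirotor (assumed values only).
battery_capacity_wh = 20.0   # e.g., roughly 1.8 Ah at 11.1 V
hover_power_w = 80.0         # typical hover draw for a small quadcopter
flight_time_min = battery_capacity_wh / hover_power_w * 60
print(f"Estimated hover time: {flight_time_min:.0f} minutes")  # about 15 minutes

A drone that could top up continuously while hovering over a transmitter would not be bound by that limit.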
"Imagine using a drone to wirelessly transmit power to sensors on things such as bridges to monitor their structural integrity," says Paul Mitcheson, professor in the Department of Electrical and Electronic Engineering. "This would cut out humans having to reach these difficult-to-access places to recharge them."
The technology could potentially allow flying drones to stay airborne indefinitely simply by hovering over a ground support vehicle to recharge. Image credit: Imperial College London.