Specialized Thermoplastics: HPTPs

High-performance thermoplastics (HPTPs) are specialized polymers used for demanding applications, largely because of their superior thermal resistance compared to engineering thermoplastics (ETPs) such as nylons and polycarbonates (PC). The tradeoff is that their prices are often higher.
In general, economics play a major role in product design, says Anthony Vicari, Advanced Materials Lead Analyst at Lux Research.
“When designing a product, companies will choose the cheapest material that meets their needs; an expensive HPTP like Polyetherimide (PEI) or Polyetheretherketone (PEEK) is not freely chosen if a cheaper ETP like PC can suffice.”
A thermoplastic is a plastic made from polymer resins that becomes a homogenized liquid when heated and hardens when cooled. When cooled below its glass transition temperature, however, it becomes glass-like and fractures easily. A variety of thermoplastics exist, each varying in crystalline structure and density.
HPTPs are also typically defined as high-temperature thermoplastics with a melting point above 150 °C.
“A high melting point for a thermoplastic correlates with other important characteristics such as mechanical strength and chemical resistance/inertness,” says Vicari.
Commodity thermoplastics, ETPs, and HPTPs fall into a structure often depicted as a “thermoplastic performance pyramid” with HPTPs at the top. Vicari says that, in general, the higher the performance of the polymer, the higher its price and location on the pyramid. The pyramid itself is divided into four stages with each stage split into two categories: amorphous and semi-crystalline. The four stages of plastics from bottom to top are: commodity, engineering, high temperature, and extreme temperature.
Over time, advances in polymer quality, polymer processing, and product design are improving the performance of ETPs. This may blur the distinction between the higher performing ETPs like PC and the lower performing HPTPs like Polysulfone (PSU), making substitutions possible in some cases, says Vicari. The average yearly global growth rates for the major ETPs are around 4%.
Thermoplastics and Market Demands
“Demand for polymers in general continues to rise, and with that the demand for HPTPs,” Vicari says. “Across a broad spectrum of applications, companies continue to demand higher performing products, requiring higher performing materials.”
For example, PEEK is a linear thermoplastic that can be processed in a temperature range of 350-420°C with excellent chemical and temperature resistance and mechanical strength. It has applications in environments such as nuclear and chemical plants, oil and geothermal wells, and high-pressure steam valves. Major manufacturers include Victrex, Solvay, Evonik, and Arkema.
Key HPTP markets include medical, aerospace, electronics, automotive, oil and gas (O&G), and, more recently, 3-D printing or additive manufacturing.
Over the past three decades, the use of plastics in automotive applications has grown due to their light weight and, therefore, superior fuel efficiency in comparison to metal. HPTPs and ETPs also offer more design flexibility, helping to reduce development time and cost. Current trends are mainly influenced by cost and weight reduction, environmental considerations, and recycling from end-of-life vehicles.
At present, only some ETPs like polycarbonate can be recycled. “In general, recycling results in some loss of material properties compared to virgin material, so recycled plastics are used in less demanding applications,” says Vicari.
The HPTP market faces several challenges. First, companies discover and bring to market new polymers only rarely. Most advances are developments of new grades or blends of existing polymers.
Second, only a few markets like aerospace and oil and gas can support the costs associated with the highest performing polymers. This limits growth for those materials at the high end, while improvements in ETP performance outlined earlier limit growth at the low end of the HPTP market.
Third, in addition to raw material cost, the high melting points that make HPTPs useful also make processing these materials more difficult and expensive, says Vicari.
On the other hand, opportunities include growth in the fiber reinforced composites market, efforts toward light weighting in aerospace and transportation, and 3-D printing. In the case of 3-D printing, intrinsically high HPTP performance can compensate for the lower quality of print versus molded plastics.
Fabrication Processes and Prospects
The thermoplastics fabrication industry has been evolving for decades, starting at least in 1926 with the production of one of the first commercial injection molding machines by Ziegler and Eckert.
Typically, a fabrication process depends largely on the end-use application and required properties of the product. For example, applications in the automotive, manufacturing, and electronics sectors rely on injection molding and to a growing extent 3-D printing. Meanwhile, food and medical applications use thermoforming. Other technologies include blown film, compounding, and extrusion.
Potential entrants to the thermoplastic fabrication industry are likely to face various roadblocks, along with a few opportunities, depending on the type of technology they choose. In general, limiting factors according to the consultancy Nexant include rising environmental concerns along with stringent government standards and regulations, higher power costs corresponding to increased throughput, and inter-polymer resin competition for injection molded applications.
As demand for plastics continues to rise faster than global GDP, there is interest in using various materials and resin blends and in developing machine processing versatility.
Over the last decade, interest in 3-D printing has increased, resulting in growing competition among developers, says Nexant. Despite the potential market, limited materials and the high initial capital investment costs associated with 3-D printers can be constraining factors. Regardless, 3-D printing offers the potential for ongoing technology advancement driven by printer performance, multi-material compatibility, and the capability to yield finished products.
Compounding and Injection Molding
Polymer compounding is a standard practice and is critical for producing important co-polymer blends and composite materials, says Vicari. End products range from automobile tires to glass fiber and carbon fiber composites.
Compounds can almost always be designed to match a customer's needs better and, sometimes, at a lower cost compared to uncompounded resins or non-polymer based materials. Commonly, polymers are compounded for injection-molding applications with the addition of fibers or mineral fillers for reinforcement along with a range of several plastics additives such as impact modifiers, lubricants, thermal and ultraviolet (UV) stabilizers, pigments, and other materials.
The late 1940s and 1950s marked the emergence of injection molding as commercialization of large-volume thermoplastic resins took place. The process offers several advantages, including high production rates, low labor costs, and minimal scrap losses, among others. However, expensive molds and equipment are a disadvantage as molds often cannot be readily interchanged between different plastics.
Today, injection molding trends focus on making molding machines more compact, reliable, and easier to operate, along with techniques to reduce capital and operating costs, says Nexant.
Simply put, the injection molding process involves feeding plastic pellets into an extruder, melting them, and injecting the melt into a mold cavity, where it cools and solidifies. This process occurs in cycles, with each cycle ejecting one plastic part per cavity in the mold.
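The cycle arithmetic described above can be sketched in a few lines. The cycle time and cavity count below are hypothetical example values, not figures from the article:

```python
def parts_per_hour(cycle_time_s: float, cavities: int) -> float:
    """Each injection molding cycle ejects one part per mold cavity,
    so hourly output is cycles per hour times the cavity count."""
    cycles_per_hour = 3600.0 / cycle_time_s
    return cycles_per_hour * cavities

# Example: a 30-second cycle on an 8-cavity mold.
print(parts_per_hour(30.0, 8))  # 960.0 parts/hour
```

This is why high production rates are listed among the advantages of the process: adding cavities multiplies throughput without changing the cycle time.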
Lastly, the largest market for injection molded plastics is Asia, specifically China, with North America and Western Europe also presenting market opportunities. Currently, the automotive industry is driving demand because plastics weigh less than the other materials that make up a vehicle.


LEDs Electrify Energy Management Efforts

The news about lighting is that there has been a revolution in solid-state lighting (SSL).
The extent of the revolution can be gleaned from Dr. Mark Rea, director of the Lighting Research Center (LRC) at Rensselaer Polytechnic Institute in New York. “There was a story line in lighting that you never displace a light source, you can only add a new one to the family,” he says. “LEDs have changed all that.”
Rea says that LEDs can compete with any lighting technology and do a better job. Because LEDs use so little power, the only cost-effective energy-saving lighting control strategy involves an occupancy sensor and a switch. Rea says that the old “ballistic” model of control that was used until a few years ago makes less economic sense. That model is based on the notion of launching an on/off or dimming message and seldom if ever looking for feedback.
“Now it’s two-way communication,” he says. “Controls are very different philosophically than they were even five years ago. In the future, the data acquired with new controls may be even more valuable than the energy those controls save.”
The minimum requirements for a two-way LED lighting control strategy are occupancy sensors, switches, dimmers, and networks.
Occupancy Sensors
The most common occupancy sensors use one of two technologies, or a combination of both.
Passive Infrared (PIR) senses the difference between the heat emitted by humans and background heat. A space is assumed to be occupied when the human heat is in motion. PIR requires a direct line of sight from the sensor to the occupant and is effective at detecting people walking into or out of a space. However, it only works for sensing major motion.
An ultrasonic sensor emits a signal that is reflected off all objects in an area. It detects motion by the Doppler effect, namely, the shift in frequency between the emitted and reflected signals if an object is in motion. Ultrasonic sensing can cover a space even if obstacles are present; it also can detect small motions. The problem is that it can be sensitive to motions caused by vibrations or air currents.
Dual technology uses both methods in combination for more reliable operation. In order to turn lights on automatically, both sensors must detect someone entering a room. After that, only one of the two is needed to keep the lights on while the area is occupied.
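The dual-technology turn-on/hold-on rule can be expressed as a small decision function. This is a simplified sketch of the logic described above; it ignores timeouts and the code-mandated manual-on behavior discussed later:

```python
def lights_should_be_on(currently_on: bool, pir: bool, ultrasonic: bool) -> bool:
    """Dual-technology occupancy logic.

    Turning the lights on requires agreement of both sensors,
    which reduces false triggers from air currents or vibration.
    Keeping them on requires only one sensor, which reduces
    false-offs while the space is still occupied.
    """
    if not currently_on:
        return pir and ultrasonic
    return pir or ultrasonic

# PIR alone cannot turn the lights on...
print(lights_should_be_on(False, True, False))   # False
# ...but either sensor alone keeps them on once lit.
print(lights_should_be_on(True, False, True))    # True
```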
All three versions are available in ceiling-mount or wall-mount configurations.
Some energy codes such as ASHRAE 90.1 and New York City’s Energy Code LL48 mandate that lights be switched on manually when the first person enters a space and switched off automatically when the space is no longer occupied. However, it is sometimes acceptable to switch the lights on automatically to 50% of the normal illumination level when a person enters, requiring a manual switch for full illumination.
Using sensors to turn off lights or dim them when a space is unoccupied can save energy. Even in offices that are generally used all day, workers often arrive and leave at different times, rendering timer control counterproductive.
Different spaces, such as open plan offices, partitioned offices, private offices, utility rooms, cafeterias, conference rooms, hallways, and lobbies have different lighting needs.
In any particular application, it is necessary to know the exact specifications of a sensor to decide how it should be mounted for coverage of the targeted space. For example, in an open office with cubicles, ceiling-mounted sensors should probably be specified and arranged so that their coverage patterns overlap. (Coverage patterns are specified by the manufacturer for each device.) In such an office, it often is a good idea to choose a longer time for automatic off, say 15 to 30 minutes.
There are a variety of methods for LED dimming. The following basic facts about LEDs should be considered in dimmer design and selection.
• Their light output is directly proportional to current.
• The relationship between LED voltage and current is nonlinear.
• Correlated Color Temperature (CCT) of an LED is specified at a given voltage/current operating point.
• Efficacy in lumens/watt is specified at a given operating point.
• Switching times between emitting and non-emitting states are on the order of nanoseconds.
• LEDs have a significant startup inrush current — the peak to average current ratios can be as high as 30:1.
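The nonlinear voltage-current relationship in the list above can be illustrated with an idealized Shockley diode model. The saturation current and the combined emission-coefficient/thermal-voltage term below are made-up example values, not taken from any real LED datasheet:

```python
import math

I_SAT = 2e-15  # saturation current in amps (hypothetical value)
N_VT = 0.1     # emission coefficient x thermal voltage, volts (hypothetical)

def led_current_amps(forward_voltage: float) -> float:
    """Idealized diode law: I = I_sat * (exp(V / (n*Vt)) - 1).
    Near the operating point, small voltage steps produce large
    swings in current -- the nonlinearity noted above."""
    return I_SAT * (math.exp(forward_voltage / N_VT) - 1)

# Each 0.1 V step roughly triples the current (a factor of e):
for v in (2.9, 3.0, 3.1):
    print(f"{v:.1f} V -> {led_current_amps(v) * 1e3:.1f} mA")
```

This steepness is one reason LEDs are driven by regulated constant-current drivers rather than directly from a voltage source.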
LEDs require drivers to convert AC line voltage to low voltage DC because LEDs are semiconductor diodes. An LED dimmer must be designed to be compatible with the driver and the driver with the specific LED.
When replacing an incandescent lamp with an LED, it is important to change the dimmer as well. Typical incandescent dimmers use so-called phase-cut technology, where the conduction angle of each half cycle of the input sine wave is varied to control average power. Attempting to use them with LEDs can lead to unstable performance.
Two common dimming technologies exist for LEDs: pulse-width modulation (PWM) and constant current reduction (CCR).
In PWM, the LED current is switched between zero and rated output. The ratio of on time to off time is varied to control average luminous intensity. For example, if on time equals off time, the intensity is 50% of maximum. The switching is done at a high enough frequency to avoid detectable flicker.
The advantage of PWM is that the LED is always switched on to rated current so color temperature and efficacy are constant across the dimming range.
One disadvantage of PWM is that the fast rise and fall times of the high frequency on/off pulses can generate electromagnetic interference (EMI). Performance issues also may arise if there are long wire runs between the dimmer and the light source. What’s more, the power source for the LED must be rated to handle the peak turn-on transients, not just average power.
CCR is an analog technique in which the voltage is held constant and the DC current is varied to control luminous intensity. It avoids the problems due to high frequency switching, but CCT and efficacy will vary with dimming percentage.
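The contrast between the two schemes can be sketched numerically. The rated drive current and PWM frequency below are hypothetical example values:

```python
RATED_CURRENT_MA = 350.0  # hypothetical rated LED drive current

def pwm_average_current_ma(duty_cycle: float) -> float:
    """PWM: current switches between 0 and rated output; the duty
    cycle (on time / period) sets the average, hence the intensity."""
    return duty_cycle * RATED_CURRENT_MA

def ccr_current_ma(dim_level: float) -> float:
    """CCR: the DC current itself is lowered to the target level."""
    return dim_level * RATED_CURRENT_MA

def pwm_on_off_times_s(freq_hz: float, duty_cycle: float):
    """On and off durations within one PWM period."""
    period = 1.0 / freq_hz
    return duty_cycle * period, (1.0 - duty_cycle) * period

# At 50% dimming both schemes average 175 mA, but PWM drives the
# LED at the full 350 mA whenever it is on, so CCT and efficacy
# hold steady; under CCR they drift with the reduced current.
print(pwm_average_current_ma(0.5), ccr_current_ma(0.5))  # 175.0 175.0
print(pwm_on_off_times_s(1000.0, 0.5))                   # (0.0005, 0.0005)
```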
Retrofitting Lighting Controls
The challenge of how best to retrofit existing lighting and controls is a hot topic. For simple on/off control, either from a manual switch or an occupancy sensor, there’s no problem — simply replace the lamp. Screw-base lamp replacements typically have self-contained drivers. For fluorescents, however, the ballast must be removed from the fixture and replaced with an LED driver.
To reap the benefits of LED lighting systems, a network for communication and control needs to exist. Perhaps the most basic challenge for establishing a network is how to physically interconnect the lamps and controls. This can be more difficult for retrofits than for new construction.
Dimming is a good example. Because existing incandescent and fluorescent dimmers won’t do the job, the control function must be separate from the main power source. Although different approaches exist to solving this problem, they fall into two general categories: wired and wireless.
Power over Ethernet (PoE) has many advantages among the wired systems. PoE luminaires (code-talk for lighting fixtures) typically include drivers, occupancy sensors, and dimmers.
The system uses standard Ethernet category style cable to power the LEDs and also to transmit bi-directional control and sensor signals. Cat 5 cable can deliver up to 51 watts at 37 to 57 VDC. An upgrade to the IEEE 802.3 Standard is expected to be approved in 2017, which will increase the source power to 90 watts. The most efficient LED lighting fixtures for offices draw 30 – 35 watts.
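A quick budget check shows why today's PoE power level suffices for office lighting. The 51-watt budget and 30-35 watt fixture draw come from the figures above; the safety margin is an assumed illustrative value:

```python
POE_BUDGET_W = 51.0  # max deliverable over Cat 5 per the current standard

def fits_poe_budget(fixture_w: float, margin: float = 0.10) -> bool:
    """Check whether a luminaire's draw, padded by a safety margin
    (10% here, an assumption), fits within the PoE power budget."""
    return fixture_w * (1.0 + margin) <= POE_BUDGET_W

# An efficient 35 W office fixture fits comfortably...
print(fits_poe_budget(35.0))  # True
# ...while a 90 W load would need the upcoming higher-power standard.
print(fits_poe_budget(90.0))  # False
```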
The beauty of PoE is that it enables each luminaire to be configured as an Ethernet network node with a unique IP address so that a control system can interact with each luminaire individually. This opens a world of possibilities, especially in terms of the data acquired from the installed lighting.
The control and sensing information can be handled with a dedicated controller or with software that can be uploaded to a computer or mobile device. Sensor data, such as on/off/dimmed status, occupancy, and temperature can be shared via Ethernet with Building Automation Systems (BAS) and Enterprise Resource Planning (ERP) systems. It can also be shared with other enterprises via Internet, and may be sent to the cloud for processing, storage, and analytics. The system software also can be reconfigured for future upgrades or expansion as requirements expand and the technology evolves.
One issue with PoE, however, is that it requires running cable to each luminaire. The wiring does not have to be in conduit so it can be easily installed above a drop ceiling and typically does not require a licensed electrician. However, the work should be completed by someone certified for low-voltage wiring.
A wireless system is a much less costly option for retrofitting — one must invest in some hardware but not in labor-intensive wiring.
A number of wireless technologies are on the market. Some, such as Lutron’s, are proprietary, and can be used only with Lutron products. Others use standard protocols such as Z-Wave and ZigBee; a new one designed for home automation is called Thread.
One choice that might seem surprising is Bluetooth. Originally developed in 1994, it is thought of primarily as a short-range cable-replacement technology. That is changing, however.
The next generation of Bluetooth, commonly known as Bluetooth Smart, was introduced in 2010 and may offer advantages compared to the other wireless technologies.
For one thing, it can transfer data at 1 Mbit/s, orders of magnitude faster than Z-Wave and ZigBee. Building automation does not require anything like that throughput, but the high data rate still offers advantages.
It also enables lower latency, and thus better responsiveness, as well as a lower duty cycle for transmitting information. That, plus support for sleepy nodes, which wake up only when polled, extends battery life for sensors. Frequency hopping, the ability to automatically select the clearest of 40 channels, is another possible advantage. A Bluetooth upgrade, to be released in the first half of 2017, is expected to double the speed and quadruple the range.
A drawback of Bluetooth, however, is that the core specification does not support mesh networking. However, a number of companies have developed proprietary methods for using Bluetooth with mesh networks. And the Bluetooth Special Interest Group (SIG) is working on incorporating a standardized method for that.
Load Shedding
One important way a lighting control system can help manage energy demand is through load shedding. Utilities have a problem: although they can plan for average demand, unpredictable spikes occur, for example, on an unusually hot summer day. The utility must therefore provide enough capacity to handle peak loads even if they rarely occur.
One method of satisfying a spurt in demand is with “spinning reserve,” which is the ability of a power plant already connected to the power system to increase its output. The other option is non-spinning reserve — increasing output by bringing fast-start auxiliary generators on line or importing power from other systems.
However, if the demand spike is especially high, the response can generate instability on the grid. In any event, dealing with demand spikes is expensive for utilities. So, in addition to charging commercial customers for total consumption, utilities often add a surcharge for peak demand, typically the highest load for any 15-minute period during the billing cycle.
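The billed peak described above is simply the highest average load over any 15-minute window. A minimal sketch, using a hypothetical minute-by-minute load profile:

```python
def peak_demand_kw(load_kw_per_minute, window: int = 15) -> float:
    """Highest average load over any `window`-minute period,
    the basis for a typical utility demand surcharge."""
    peak = 0.0
    for i in range(len(load_kw_per_minute) - window + 1):
        window_avg = sum(load_kw_per_minute[i:i + window]) / window
        peak = max(peak, window_avg)
    return peak

# Hypothetical day: steady 100 kW with a one-hour 180 kW spike.
profile = [100.0] * 600 + [180.0] * 60 + [100.0] * 780
print(peak_demand_kw(profile))  # 180.0 -- the spike sets the billed peak
```

Even a single sustained spike drives the whole month's demand charge, which is why shaving that one window with load shedding pays off.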
One solution may be to reduce spikes in demand. Networked LEDs can play a role in that, especially in control systems for microgrids. The grid can send a signal to its customers that their load should be reduced, so-called load shedding.
As Rea says, "you can't dim your copier or your computer but you can dim your lighting" if it’s part of a network.
A study published in the Lighting Research Technology Journal [Lighting Res. Technol., 37, 2 (2005) pp. 133 – 153] investigated the effects of reduced light levels on office workers. The researchers concluded that a reduction in luminance of 15% for paper tasks or 20% for computer tasks was undetectable for half of the participants in their study.
Further, most employees would accept dimming of 30% for paper tasks and 40% for computer tasks even though these levels are detectable. Notably, if the importance of load shedding is explained to employees, the acceptable dimming limits could be increased to as much as 40% for paper tasks and 50% for computer tasks.
Since the power used by LED lamps is approximately proportional to their light output, dimming the lighting by 50% would reduce its power demand by roughly 50%.
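Combining the study's acceptability thresholds with that proportionality gives a quick estimate of sheddable load. The fixture wattage and fixture count below are hypothetical example values; only the dimming percentages come from the study:

```python
FIXTURE_W = 32.0  # hypothetical fixture draw, within the 30-35 W range

def shed_power_w(n_fixtures: int, dim_fraction: float) -> float:
    """Power shed across a floor of fixtures, assuming LED power
    is approximately proportional to light output."""
    return n_fixtures * FIXTURE_W * dim_fraction

# 200 fixtures dimmed 20% (undetectable for computer tasks):
print(shed_power_w(200, 0.20))  # 1280.0 W shed
# The same fixtures dimmed 50% during an explained demand-response event:
print(shed_power_w(200, 0.50))  # 3200.0 W shed
```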


Fiber vs. Copper

Network system designers for projects that range from industrial manufacturing installations to data centers must consider a wide range of variables and anticipated demands when specifying data infrastructure highways.
Current technology provides two primary choices for the design medium:
• Traditional twisted-pair copper wire and cabling
• Fiber optic systems using both conventional and blown-cable designs
Industrial data centers are growing in size and complexity, and current estimates indicate 90% of active equipment will be replaced within five years. An efficient modular infrastructure, therefore, is required to adapt to fast-moving business requirements, increasing port and cable densities, and rapid deployment needs.
Infrastructure decisions made at the design stage often have a big impact on future expansions. Initial copper cable material and costs typically are 40-50% less than fiber solutions. As a result, they often are the design of choice whenever budgetary considerations are paramount. Growing demand for fiber, however, has resulted in more favorable pricing for new fiber optic system backbone installations.
Upfront costs must be weighed against long-term considerations of performance, future deployment costs to support business changes, space limitations, and modular flexibility for expanding bandwidth capabilities.
The Roads are Different
Twisted-pair copper wires transmit electrical currents and carry data primarily through the proven medium of Category 6 (CAT6) cabling. The cost of copper cabling per the IEEE 10 Gigabit Ethernet standard is currently projected at 40% that of 10 Gigabit fiber networks for links of less than 100 meters. For many small- to mid-sized industrial projects, this can be an attractive approach. The copper cable industry is also pursuing CAT8 twisted-pair products to double the capacity of CAT6.
Drawbacks compared to fiber include greater power demands for high-speed data transmission via electrons rather than optical photons. Electrons moving at high speed must overcome greater resistance and require more power to process signals. In addition, more expensive and complex environmental and cooling systems are needed to manage the heat byproduct of a copper-based data center.
Fiber optic cable consists of bundled strands of optically pure glass. These can be fabricated in a variety of configurations and transmission specifications. Typical fiber optic cables are built and rated for indoor or outdoor applications. They can be obtained in fiber ribbon counts up to 3,456 in a cable diameter of less than 1.5 inches. Other common sizes include 12-, 24-, 48-, and 72-count bundles. These usually are encased in a polyethylene foam jacket.
Digital information is carried through fiber via photons in a precise pathway. As such, data can move at higher speeds and over greater distances versus copper. This is due to minimal signal loss during data transmission. As a result, fiber optic solutions often have a strong upside for designs encompassing a large geographic area.
Fiber is lighter than copper and has a smaller installation footprint. Large projects for corporate data centers or industrial feeds are typically carried out in both supported (cable trays, aerial hooks, and so on) as well as direct-feed configurations. For direct feed, specialty tubing can be used for buried or impact-resistant applications.
It’s All About the Bandwidth
From a manufacturing viewpoint, technology-driven advances have transformed the world into a grand scale “information system,” which some believe will trigger a new industrial revolution.
Physical objects now have embedded sensors and actuators that can be linked through wired and wireless networks via Internet protocols. Bandwidth demands will likely continue to grow as business and industrial enterprises move toward decentralized controls for production and supply chain logistics.
The past decade has seen strides in wireless technology for industrial networks. The reason is clear: With wireless communications, no direct cabling is required. However, growing bandwidth constraints with wireless (particularly video over wireless) have generated increasing demands for high-speed data highways.
In bandwidth performance, optical fiber has demonstrated a convincing upside versus copper cable due to the extremely high frequency ranges optical glass can carry. With copper wire, signal strength diminishes at higher frequencies. CAT6 twisted-pair copper can be optimized for 500 MHz of bandwidth over a distance of 100 meters. In comparison, multi-mode optical fiber offers bandwidth in excess of 1,000 MHz over the same distance.
In terms of Ethernet infrastructure standards, copper is a compatible choice up to 1.0 Gigabit applications. Optical fiber, however, has demonstrated transfer rates of 10 gigabits/second (Gbps), and movement toward 40-100 Gbps is the next threshold standard. A copper solution for 10 Gigabit Ethernet is not a commercial reality at this time, making fiber the only suitable choice for these applications.
A Case for Blown Optical Fiber?
Blown fiber technology originated with British Telecom in 1982 and provided a method to upgrade capacity or to change fiber type without prohibitive infrastructure costs.
In a ground-up blown fiber installation, “highways” of tube cable are installed in buried, riser, or plenum rated tubing constructions. The tube cables can then be populated with single- or multi-mode fibers in various configurations up to 48 fiber optic tubes per bundle. Precise geometric patterns manufactured into the fiber bundle jackets provide aerodynamic properties to the bundles. They then can be “blown” into the pathways via a customized nitrogen-driven air motor system.
Manual installations that previously required large crews and disruptions to existing operations can now be accomplished by a single pair of skilled workers trained in blowing techniques. Installation lengths of 4,000-6,000 feet are typical, with end-to-end completion times averaging 6-8 man-hours.
Blown optical fiber can be cost-effective in next-generation backbone upgrades within a given facility or in expansions to new locations. It can be blown in at a rate of 200 feet/minute. It also may be blown out at the same rate for additional or upgraded capacity.
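The installation figures above lend themselves to a back-of-envelope time estimate. The blow-in rate and run lengths come from the article; the fixed setup/teardown overhead is an assumed illustrative figure:

```python
BLOW_RATE_FT_PER_MIN = 200.0  # blow-in (and blow-out) rate from the article

def blow_in_minutes(run_ft: float, overhead_min: float = 60.0) -> float:
    """Minutes to blow one fiber run, plus a fixed setup/teardown
    overhead (the 60-minute default is a hypothetical value)."""
    return run_ft / BLOW_RATE_FT_PER_MIN + overhead_min

# A typical 6,000 ft run: 30 min of blowing plus overhead.
print(blow_in_minutes(6000.0))  # 90.0 minutes
```

Even with generous overhead, a two-person crew finishing a run in well under a working day is consistent with the 6-8 man-hour figure cited above.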
Cost and complexity of future expansions are minimized without the labor-intensive steps of laying new conduit and disrupting operational areas. For the visionary designer, the up-front costs of installing a high-speed blown fiber network can more than pay for themselves by lowering the cost of future upgrades.
Copper or Fiber Infrastructure?
Arriving at the “perfect” solution for a cabling infrastructure is a difficult task that requires careful analysis of a large number of variables. These can include bandwidth demands, immunity to electrical interference, density of the connectivity space, flexibility/speed of reconfiguration, cost of electronics, device interface considerations, and budgetary constraints.
Copper’s initial low investment cost often makes it a viable alternative for compact and single-building data center needs where data transfer rates of less than 2 Gbps will meet current and future needs. Copper can be considered for relatively short data transmission distances within a defined building footprint where environmental challenges, temperature fluctuations, and electromagnetic interference have a minimum impact on signal integrity.
The higher initial costs of fiber-optic solutions can be offset by their much greater bandwidth and virtually error-free transmission over large distances. Optical data center solutions provide generally simple installations, up to 75% faster than pulled copper cables. Fiber can also deliver considerable performance, time, and cost savings for both initial and future infrastructure needs.