
Friday, March 29, 2013

Radio-Frequency Identification (RFID) Technology: A Complete Overview

RFID technology allows non-contact transfer of information (much like the familiar barcode), making it effective in manufacturing and other hostile environments where barcode labels cannot survive. Here is a look at how it works, along with its pros and cons.

Radio-frequency identification (RFID) technology involves the use of electromagnetic or electrostatic coupling in the RF portion of the electromagnetic spectrum to uniquely identify an object, animal or person. It has established itself in a wide range of markets including livestock identification and automated vehicle identification because of its ability to track moving objects. The technology has also become a primary component of automated data collection, identification and analysis systems worldwide.

Architecture and working
An RFID system consists of three components: a transceiver and antenna (usually combined into the reader), a transponder (the tag), and data-processing equipment such as a computer. A typical RFID system is shown in Fig. 1.

RFID tag. The RFID tag, also known as the transponder, acts as both transmitter and receiver in the RFID system. Its three basic components are an antenna, a microchip (memory) and the encapsulating material.

Fig. 1: RFID architecture and working

In a typical system, tags are attached to objects. Each tag has a certain amount of internal memory (EEPROM) in which it stores information about the object, such as its unique ID (serial) number, or in some cases more details including manufacture date and product composition.

When these tags pass through a field generated by a reader, they transmit this information back to the reader, thereby identifying the object. The antenna uses radio frequency waves to transmit a signal that activates the transponder. When activated, the tag transmits data back to the antenna. The data is used to notify a programmable logic controller that an action should be taken. The action could be as simple as raising an access gate or as complicated as interfacing with a database to carry out a monetary transaction.

Low-frequency (30-500 kHz) RFID systems have a short transmission range (generally less than 1.8 metres). Higher-frequency systems (850-950 MHz and 2.4-2.5 GHz) offer a longer transmission range (more than 27 metres). In general, the higher the frequency, the more expensive the system. RFID is sometimes called dedicated short-range communication.

Fig. 2: How passive tags are defined
There are two types of RFID tags: read-only and read-write. In a read-only tag, the microchip's memory is written only once, during the manufacturing process; the information, along with the serial number, can never be changed. In a read-write tag, only the serial number is written during manufacturing; the remaining memory blocks can be rewritten by the user.

Until recently, the focus of RFID technology was mainly on tags and readers which were being used in systems involving relatively low volumes of data. This is now changing as RFID in the supply chain is expected to generate huge volumes of data, which will have to be filtered and routed to the backend IT systems. To solve this problem, companies have developed special software packages called savants, which act as buffers between the RFID front-end and the IT backend. Savants are equivalent to middleware in the IT industry.

RFID reader. The RFID reader is the device used to transmit information to, and receive information from, the RFID tag. It is also referred to as an 'interrogator.' It includes sensors that read the RFID tags in its vicinity.

The reader sends a request for information to the tag. The tag responds with the respective information, which the reader then forwards to the data processing device. The tag and reader communicate with one another over a radio frequency channel. In some systems, the link between the reader and the computer is wireless.

Supporting infrastructure. The supporting infrastructure includes related software and hardware required for RFID systems. The software manages the interaction between the RFID reader and the RFID tags.

Communication protocol
The communication process between the reader and tag is managed and controlled by one of several protocols, such as ISO 15693 and ISO 18000-3 for HF, or ISO 18000-6 and EPCglobal Class-1 Gen-2 for UHF. Basically, when the reader is switched on, it starts emitting a signal at the selected frequency band (typically 860-915 MHz for UHF or 13.56 MHz for HF). Any corresponding tag in the vicinity of the reader will detect the signal and use the energy from it to wake up and supply operating power to its internal circuits. Once the tag has decoded the signal as valid, it replies to the reader, indicating its presence by modulating (affecting) the reader field.

Anti-collision. If many tags are present, they will all reply at the same time. At the reader end, this is seen as signal collision and an indication of multiple tags. The reader manages this problem by using an anti-collision algorithm that allows tags to be sorted and individually selected. There are many different types of algorithms (binary tree, aloha, etc) which are defined as part of the protocol standards.

The number of tags that can be identified depends on the frequency and protocol used, and typically ranges from 50 tags/s for HF to 200 tags/s for UHF. Once a tag is selected, the reader can perform a number of operations, such as reading the tag's memory. This process continues under control of the anti-collision algorithm until all the tags have been selected.
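To make the anti-collision idea concrete, here is a minimal sketch in Python of the slotted-ALOHA family of algorithms mentioned above (tag counts and slot counts are made up for illustration): the reader announces a frame of time slots, each tag picks one slot at random, slots holding exactly one reply are read successfully, and collided tags simply retry in the next frame.

import random

def slotted_aloha_inventory(tag_ids, num_slots=16):
    # Identify every tag through repeated frames of slotted ALOHA.
    pending = set(tag_ids)
    identified = []
    frames = 0
    while pending:
        frames += 1
        slots = {}
        for tag in pending:
            # Each tag independently picks a random reply slot.
            slots.setdefault(random.randrange(num_slots), []).append(tag)
        for replies in slots.values():
            if len(replies) == 1:  # singleton slot: the read succeeds
                identified.append(replies[0])
                pending.discard(replies[0])
            # two or more replies collide; those tags retry next frame
    return identified, frames

tags, frames = slotted_aloha_inventory(range(50))
print(f"identified {len(tags)} tags in {frames} frames")

Binary-tree algorithms achieve the same goal deterministically, by repeatedly splitting the colliding population on successive ID bits instead of relying on random slot choices.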

Inductively coupled RFID tags. These original tags were complex systems of metal coils, antennae and glass, powered by the magnetic field generated by the RFID reader. An alternating current produces both electric and magnetic fields, i.e., it is electromagnetic. The name 'inductively coupled' comes from the magnetic field induced by the current in the coil.
Pros and Cons of RFID Technology
Pros
1. RFID tags are rugged and robust and can work in harsh temperatures and environment. The RFID system works at a remarkably high speed, even in adverse conditions.
2. RFID tags are available in different shapes, sizes, types and materials. The information on a read-only tag cannot be altered or duplicated. Read-write tags can be used repeatedly. RFID tags can be read with very low error rates.
3. Direct physical contact between the tags and the reader is not required. RF technology is used for communication.
4. Multiple RFID tags can be read at the same time, in bulk of ten to 100 tags. Reading of the tags is automatic and involves no labour.
5. RFID systems can identify and track unique items, unlike the bar code system which identifies only the manufacturer and the product type.
6. The entire RFID system is very reliable, which allows the use of RFID tags for security purpose.
7. The storage capacity of RFID tags is greater than that of any other automatic identification and tracking system.
Cons
1. The RFID system is costly compared to other automatic identification systems. The cost can increase further if the RFID system is designed for a specific application.
2. Tags are larger and heavier than barcode labels. The electronic components, such as the antenna and memory, make them bulky.
3. Although the tags work in harsh environments, the signals from certain types of tags get affected when they come in close contact with certain metals or liquids. Reading such tags becomes difficult and sometimes the data read is erroneous.
4. There is no easy way to identify damaged tags and replace them with intact ones.
5. Although the tags do not require line-of-sight communication, they can be read within a specified range only.





Capacitively coupled tags. These tags were created to lower the technology’s cost. These were disposable tags that could be applied to less expensive merchandise and made as universal as bar codes. Capacitively coupled tags used conductive carbon ink instead of metal coils to transmit data. The ink was printed on paper labels and scanned by readers.

Motorola’s BiStatix RFID tags. These were the frontrunners in this technology. They used a silicon chip that was only 3 millimetres wide and stored 96 bits of information. This technology didn’t catch on with retailers, and BiStatix was shut down in 2001.

Inductively coupled and capacitively coupled RFID tags aren't used as commonly today because they are expensive and bulky. Newer innovations in the RFID industry include active, semi-passive and passive RFID tags. These tags can store up to 2 kilobytes of data and are composed of a microchip, antenna and, in the case of active and semi-passive tags, a battery. The tag's components are enclosed within plastic, silicon or sometimes glass. Table I gives a performance overview of the different-frequency passive tags.


Active and passive tags
The first basic choice when considering a tag is between passive, semi-passive and active. Passive tags can be read from a distance of up to 4-5 metres using the UHF frequency band, whilst the other types of tags (semi-passive and active) can achieve much greater communication distances of up to 100 metres for semi-passive and several kilometres for active. This large difference in communication performance can be explained by the following:
1. Passive tags use the reader field as a source of energy for the chip and for communication from and to the reader. The available power from the reader field not only reduces very rapidly with distance but is also controlled by strict regulations, resulting in a limited communication distance of 4-5 metres when using the UHF frequency band (860-930 MHz).
2. Semi-passive (battery-assisted backscatter) tags have built-in batteries and therefore do not require energy from the reader field to power the chip. This allows them to function with much lower signal power levels, resulting in greater distances of up to 100 metres. Distance is limited mainly because the tag does not have an integrated transmitter, and still has to use the reader field to communicate back to the reader.
3. Active tags are battery-powered devices that have an active transmitter onboard. Unlike passive tags, these generate RF energy and apply it to the antenna. This autonomy from the reader means that they can communicate from distances of over several kilometres.
HF and UHF are best suited to the supply chain. UHF, due to its superior read range, will become the dominant frequency; LF and microwave will be used only in certain niche cases.

Tag ICs
RFID tag ICs are designed and manufactured using some of the most advanced and smallest-geometry silicon processes available. The result is impressive when you consider that a UHF tag chip measures only around 0.3 mm².

In terms of computational power, RFID tags are quite dumb, containing only basic logic and state machines capable of decoding simple instructions. This does not mean that they are simple to design. In fact, very real challenges exist such as achieving very low power consumption, managing noisy RF signals and keeping within strict emission regulations.


Other important circuits allow the chip to transfer power from the reader signal field, and convert it via a rectifier into a supply voltage. The chip clock is also normally extracted from the reader signal.

Fig. 3: HF (13.56MHz) tag example

Fig. 4: UHF (860-930MHz) tag example
The amount of data stored on a tag depends on the chip specifications, and can range from just simple identifier numbers of around 96 bits to more information about the product containing up to 32 kbits. However, greater data capacity and storage (memory size) leads to larger chip sizes and hence more expensive tags.

In 1999, the Auto-ID Center (now EPCglobal), based at the Massachusetts Institute of Technology in the US, together with a number of leading companies, developed the idea of a unique electronic identifier code called the electronic product code (EPC). The EPC is similar in concept to the universal product code used in barcodes today.


Fig. 5: Basic tag IC architecture
Having just a simple code of up to 256 bits would lead to smaller chip size and hence lower tag costs, which is recognised as the key factor for widespread adoption of RFID in the supply chain. Tags that store just an ID number are often called licence plate tags.
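For a flavour of how such a code is structured, the 96-bit EPC General Identifier (GID-96) packs an 8-bit header, a 28-bit general manager number, a 24-bit object class and a 36-bit serial number into one value. A small sketch of unpacking it in Python (the example EPC value is invented for illustration):

def parse_gid96(epc):
    # Field widths for the GID-96 layout, most significant bits first:
    # 8-bit header | 28-bit general manager | 24-bit object class | 36-bit serial
    serial       =  epc        & ((1 << 36) - 1)
    object_class = (epc >> 36) & ((1 << 24) - 1)
    manager      = (epc >> 60) & ((1 << 28) - 1)
    header       =  epc >> 88
    return header, manager, object_class, serial

# Hypothetical tag value, used for illustration only
header, manager, object_class, serial = parse_gid96(0x350000C74000F4C000000005)
print(hex(header), manager, object_class, serial)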

Tag classes
One of the main ways of categorising RFID tags is by their capability to read and write data. This leads to the following five classes:

Class 0 (read-only, factory-programmed). These are the simplest type of tags, where the data, which is usually a simple ID number (EPC), is written only once into the tag during manufacture. The memory is then disabled from any further updates. Class 0 is also used to define a category of tags called electronic article surveillance or anti-theft devices, which have no ID and announce their presence only when passing through an antenna field.

Class 1 (write-once read-only, factory- or user-programmed). In this case, the tag is manufactured with no data written into the memory. Data can then be written one time, either by the tag manufacturer or by the user. Following this, no further writes are allowed and the tag can only be read. Tags of this type usually act as simple identifiers.

Class 2 (read-write). These are the most flexible type of tags, where users have access to read and write data into the tag’s memory. They are typically used as data loggers and therefore contain larger memory space than what is needed for just a simple ID number.

Class 3 (read-write with on-board sensors). These tags contain on-board sensors for recording parameters like temperature, pressure and motion by writing into the tag’s memory. As sensor readings must be taken in the absence of a reader, the tags are either semi-passive or active.

Class 4 (read-write with integrated transmitters). These are like miniature radio devices which can communicate with other tags and devices without the presence of a reader. This means that they are completely active with their own battery power source.
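The classes above are summarised compactly below as a lookup table (a descriptive sketch of the list, not any official API):

# Summary of the tag classes described above
TAG_CLASSES = {
    0: "read-only, factory-programmed (ID written once at manufacture)",
    1: "write-once read-only (factory- or user-programmed)",
    2: "read-write (user-writable memory, e.g. data loggers)",
    3: "read-write with on-board sensors (semi-passive or active)",
    4: "read-write with integrated transmitter (fully active, battery-powered)",
}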

Selecting a tag
Choosing the right tag for a particular RFID application is an important consideration, and should take into account many of the factors listed below:
1. Size and form factor—where does the tag have to fit?
2. How close will the tags be to each other?
3. Durability—does the tag need to have a strong outer protection against regular wear and tear?
4. Is the tag reusable?
5. Resistance to harsh (corrosive, steamy, etc) environments
6. Polarisation—the tag’s orientation with respect to the reader field
7. Exposure to different temperature ranges
8. Communication distance
9. Influence of materials such as metals and liquids
10. Environment (electrical noise, other radio devices and equipment)
11. Operating frequency (LF, HF or UHF)
12. Supported communication standards and protocols (ISO, EPC)
13. Regional (US, European and Asian) regulations
14. Will the tag need to store more than just an ID number like an EPC?
15. Anti-collision—how many tags in the field must be detected at the same time and how quickly?
16. How fast will the tags move through the reader field?
17. Reader support—which reader products are able to read the tag?
18. Does the tag need to have security?


Fig. 6: Two different ways of energy and information transfer between the reader and tag
How tags communicate

In order to receive energy and communicate with a reader, passive tags use one of the two following methods shown in Fig. 6. These are near-field, which employs inductive coupling of the tag to the magnetic field circulating around the reader antenna (like a transformer), and far-field, which uses techniques similar to radar (backscatter reflection) by coupling with the electric field.

The near field is generally used by RFID systems operating in the LF and HF bands, and the far field is used for longer-read-range UHF and microwave RFID systems. The theoretical boundary between the two fields depends on the frequency used, and is in fact directly proportional to λ/2π, where λ is the wavelength. This gives, for example, around 3.5 metres for an HF system and 5 cm for UHF, both of which are further reduced when other factors are taken into account.
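A quick calculation in Python confirms the boundary figures quoted above:

from math import pi

C = 299_792_458  # speed of light in m/s

def field_boundary_m(freq_hz):
    # Near/far-field boundary at wavelength / 2*pi
    return C / freq_hz / (2 * pi)

print(f"HF  (13.56 MHz): {field_boundary_m(13.56e6):.2f} m")       # about 3.5 m
print(f"UHF (900 MHz):   {field_boundary_m(900e6) * 100:.1f} cm")  # about 5.3 cm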

Fig. 7: HF tag orientation with different antenna configurations
Tag orientation (polarisation)
How tags are placed with respect to the polarisation of the reader’s field can have a significant impact on the communication distance for both HF and UHF tags. This can result in a 50 per cent reduction of the operating range and, in the case of the tag being displaced by 90° (see Fig. 7), inability to read the tag.

The optimal orientation of HF tags is when the two antenna coils (reader and tag) are parallel to each other as shown in Fig. 7. UHF tags are even more sensitive to polarisation due to the directional nature of the dipole fields. The problem of polarisation can be overcome to a large extent by different techniques implemented either at the reader or tag as shown in Table IV.
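For a feel of the numbers, the standard polarisation-loss factor for linearly polarised antennas says received power falls as the squared cosine of the misalignment angle. This is a simplified far-field model (it ignores near-field coupling details), but it shows why 90° can kill the read entirely:

from math import cos, radians

def polarisation_loss(theta_deg):
    # Fraction of maximum power coupled at a misalignment of theta degrees
    return cos(radians(theta_deg)) ** 2

for theta in (0, 45, 60, 90):
    print(f"{theta:>2} deg: {polarisation_loss(theta):.2f} of maximum power")
# 45 deg already halves the coupled power; at 90 deg it drops to zero.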

The future
Developments in RFID technology continue to yield larger memory capacities, wider reading ranges and faster processing. However, it is highly unlikely that the technology will ultimately replace barcode. Even with the inevitable reduction in raw materials coupled with economies of scale, the integrated circuit in an RF tag will never be as cost-effective as a barcode label. RFID, though, will continue to grow in its established niches where barcode or other optical technologies are ineffective, such as in the chemical container and livestock industries.

Wednesday, March 27, 2013

Nanotechnology: Commercial Applications

The scope of application of nanotechnology is very wide. Advanced research organisations around the world have identified five key market areas where nanotechnology can bring market-changing competitiveness as detailed here.

1. High-brightness LEDs
 
Currently, high-brightness light-emitting diodes (LEDs) are the most promising light source, as they offer better efficiency, longer life and higher mechanical strength. Because high-brightness LEDs have a typical heat flux of over 100 W/cm², conventional packaging is not suitable. High junction temperature substantially degrades the efficiency, colour quality, reliability and life of solid-state lighting devices. Failure analysis consistently shows that the failure is not in the LED itself but in the package components, due to high operating temperatures.

Nanomaterials can play a big role in heat dissipation in LEDs. As shown in Fig. 3, a die-attach adhesive not only attaches the LED die to the substrate but also provides thermal and electrical conduction between the die and the package. Hence the heat-conducting ability of the die-attach adhesive is critical to performance.

Fig. 3: Improved LED
The existing epoxy adhesive has a very low shelf temperature of -40 to -20°C, needed to suppress its reactivity, which makes the material inconvenient and energy-consuming to transport and store. Second, most epoxy resins need to be cured above 130°C for a long time, which may damage the LED; the high curing temperature also wastes energy and reduces mass-production efficiency.

Research work in this area has led to the development of a die-attach adhesive with a high thermal conductivity of about 25 W/mK, a low curing temperature of about 85°C and a high shelf temperature of -10°C, using nanomaterials and processing technology. This was made possible by using metal nanowires as filler additives in the adhesive matrix. These filler additives fill the gaps between the original commercial fillers to form a continuous, multichannel heat-transmission pathway that enhances the thermal conductivity, as shown in Fig. 4.

A series of low-cost environment-friendly (heavy-metal-free) luminescent quantum dots as the down-converting phosphors for high-performance LED devices has been developed. These luminescent, spectrally tunable quantum dots are coupled to a single-crystal LED chip to serve as a colour converter.

Quantum dots are quantised electronic structures in which electrons are confined with respect to motion in all three dimensions. Their properties resemble those of atoms in an electromagnetic cage, making fascinating novel devices possible. Typically, a quantum dot is around 10 nm in size and may contain thousands of atoms. The size generally varies from 4 to 20 nm at room temperature, depending on the material. Such drastic reduction in size leads to carrier localisation in all three dimensions.

Using a proper combination of quantum dots of different sizes and compositions, together with improved package design, it is possible to manufacture high-quality, non-toxic white solid-state lighting devices with a high colour rendering index (CRI>85) and high efficacy (>60 lm/W). Quantum-dot technology can significantly save energy and improve the colour quality of conventional LEDs.

There are two main classes of white LEDs: multi-chip and single-chip devices. Multichip white LEDs (RGB LEDs), consisting of red, green and blue emitting chips, show three emission bands and possess a good CRI, good efficiency and well-tunable colour. However, the efficiencies of the red, green and blue LEDs degrade over time at different rates, so although a high-quality white light is produced initially, its quality degrades over time. The cost of multichip white LEDs is also high.

In comparison, single-chip white LEDs are low-cost and offer a high luminous efficiency, making them suitable for general-purpose lighting in the future. These LEDs are also called down-conversion LEDs, in which the blue light is downconverted into the light of a longer wavelength and the combination of blue and yellow lights is interpreted as white light by the human eye.

Fig. 4: Die-attach adhesive matrix

Fig. 5: Quantum-dot high-brightness LED
However, commercially available white LEDs emit a harsh and bluish cold-white light with poor colour rendering properties. This limits their wide-scale use in indoor illumination. When these white LEDs are used as the backlight of an LCD, the pale white light from blue and yellow hues cannot express the natural colours of objects faithfully in general circumstances.

Quantum dots can easily be size-tuned to match a white light spectrum. Being much smaller than the wavelength of visible light, they also eliminate light scattering and the associated optical losses. Thus highly luminescent and spectrally tunable quantum dots can be coupled to bright and efficient single-crystal LEDs. This approach utilises quantum dots as the down-converting medium, which efficiently absorbs blue light emitted by, for example, a highly efficient gallium nitride LED and converts it into another wavelength of choice.
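The energy cost of down-conversion follows directly from the photon energies involved. Converting a blue photon into a longer-wavelength one gives up the energy difference as heat (the Stokes loss), which caps the conversion efficiency even for a perfect converter. A worked example with illustrative wavelengths:

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

blue, yellow = 450.0, 570.0  # nm, assumed for illustration
print(f"blue photon:   {photon_energy_ev(blue):.2f} eV")
print(f"yellow photon: {photon_energy_ev(yellow):.2f} eV")
print(f"ideal down-conversion efficiency: {blue / yellow:.0%}")  # about 79%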

One of the most serious obstacles to using quantum-dot LEDs in consumer products is the toxicity of cadmium- and lead-based quantum dots. This problem has been addressed by developing high-luminescence, non-toxic quantum dots using InP/ZnS core shell.

First, nanoparticles of quantum dots using InP/ZnS with tunable size and colour are made. It is then followed by a wet chemical process to form a uniform inorganic protective shell on the quantum dots to achieve stability of the quantum dots and to improve the reliability of the polymer encapsulation of the LEDs. Then the encapsulated quantum dots with shells are further modified to make them well dispersed in the polymer matrix. The luminescent, spectrally-tunable quantum dots are then coupled with single-crystal LED chips to serve as a colour converter. Fig. 5 shows the scheme of a non-toxic quantum-dot LED.

2. Building materials
In a typical office building, air-conditioning is the largest energy guzzler, which consumes about 48 per cent of the total energy. So the demand for better thermal insulation in the building enclosures has dramatically increased over the last few years.

To improve thermal insulation of composite wall panels, it is not practical to increase the wall thickness significantly. Research in this area is therefore focused on reducing the thermal conductivity of the wall materials without compromising on the other beneficial properties.

Introduction of porosity is the most straightforward way of improving thermal insulation. But porosity reduces structural strength and increases moisture uptake, which leads to corrosion. Nanomaterials and nanotechnology can address this issue. One solution is to use thermally insulating coating of aerogels and titanium dioxide (TiO2) nanoparticles on building materials like conventional glass, tiles and aluminium plates.

The high porosity of aerogels makes them an excellent thermal insulator. However, when radiation becomes a significant heat-transfer mechanism at high temperature, the insulating performance of aerogels decreases, since they are highly transparent in the 3-8 µm wavelength region. With the addition of opacifiers, e.g., TiO2, this infrared transparency can be reduced or even eliminated, ensuring that the insulating coating is suitable for a wide range of climates.

Another merit of nano TiO2 particles is their photo-induced superhydrophilicity property that can provide a self-cleaning function. This property makes these particles one of the most promising contenders for insulative coating in outdoor applications.

Another solution to reduce electricity consumption is use of high-performance cementitious materials for construction of the external wall with enhanced thermal insulation. For a given wall thickness, the thermal conductivity could be reduced if solid concrete is replaced with foamed concrete.

To meet the structural requirement, the foamed concrete should have a compression strength of 30 MPa or more. To reduce the electricity consumption of the air-conditioning system, it should have a thermal conductivity of less than 1 W/mK. To achieve these technical requirements, lightweight sand and a foaming agent are used to make lightweight concrete with low thermal conductivity.
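To see what the 1 W/mK target means in practice, compare the steady-state conductive heat flux q = k x dT / d through a wall for ordinary dense concrete (k of roughly 1.7 W/mK, a typical handbook value assumed here) and the foamed-concrete target. The wall thickness and temperature difference are also assumed:

def heat_flux(k, thickness_m, delta_t):
    # One-dimensional steady-state conduction: q = k * dT / d, in W/m^2
    return k * delta_t / thickness_m

WALL_M = 0.15  # assumed wall thickness, m
DT_K = 10.0    # assumed indoor/outdoor temperature difference, K

print(f"solid concrete:  {heat_flux(1.7, WALL_M, DT_K):.0f} W/m^2")
print(f"foamed concrete: {heat_flux(1.0, WALL_M, DT_K):.0f} W/m^2")
# The lower conductivity cuts conduction losses proportionally, which is
# where the air-conditioning savings come from.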

To maximise the strength, pozzolans are added along with thermal curing. Further, to prevent corrosion of the steel reinforcement, a fibre-reinforced cementitious composite is applied to the wall surface. It acts as a barrier to water and chemical penetration. It has been found that, by suitably adopting this technology, the electric power consumption of air-conditioning can be reduced by half.

3. Carbon-nanotube GHz amplifier
Carbon nanotube field-effect transistors (CNFETs) have recently been used to build a GHz amplifier. CNFETs are very attractive for future RF applications such as amplifiers, mixers and switches. Carbon nanotubes and graphene have superior electrical, mechanical and thermal properties, thanks to their one-dimensional transport and extremely strong carbon-carbon bonds, which lead to large mean free paths, high current densities and low thermal noise. Researchers are investigating ways of exploiting these characteristics to develop better-performing electronic devices.

Ballistic transport of electrons through semiconducting carbon nanotubes has led to the development of CNFETs, one of the key building blocks of any electronic application. The one-dimensional tube structure allows quantum-capacitance-limited operation, resulting in low input and feedback capacitance. It also significantly reduces the scattering probability, promising high-frequency performance with a low noise figure. The structure also causes the drain current to depend linearly on the applied gate voltage, promising uniquely linear devices.

CNFETs are thermally robust, which simplifies heat management. Conventional semiconductor devices used in RF front-ends can be designed with high linearity, but at the expense of reducing the operating efficiency to about 5 per cent. Recently developed CNFETs deliver similar linearity at 70 per cent less dissipated power, with a deep impact on battery life and heat dissipation.

Fig. 6: Cross-section of a CNFET 
Typical construction steps are as follows: In a CNFET, thousands of carbon nanotubes are laid out between the drain and the source, providing a low resistance channel as shown in Fig. 6. The current flow through the channel is controlled by a top gate, separated from the tube array by a high-K dielectric. The base substrate is Si, which serves as the mechanical platform. A thick SiO2 is grown over it to minimise the device parasitics for high-performance operation. On this layer, a suitable catalyst is deposited to enhance growth and directionality. Then the carbon nanotubes are grown in a CVD furnace. Source and drain fingers and pads are then defined by the use of standard photolithography. Then a gate oxide is grown, on top of which the gate is patterned.

Source-to-drain distance, or channel length, is 0.8 µm. Gate length is about 0.45 µm and total gate periphery is 800 µm. The CNFET-based amplifier is typically biased at VGS=-0.5V and VDS=2.5V, with the gate and drain DC bias applied through chokes. About 12.5 dB of gain can be obtained from a single stage centred at 1.3 GHz, as shown in Fig. 7.

Fig. 7: CNFET amplifier gain 
4. Carbon-nanotube pressure sensor
 
There has been much interest over the last couple of decades in exploiting the outstanding electrical, mechanical and optical properties of carbon nanotubes to fabricate micro-electromechanical system (MEMS) devices. In 2010, a carbon nanotube forest-based pressure sensor was developed that had a nearly symmetrical response for both positive and negative gauge pressures, using a suspended diaphragm entirely covered with CNT forest.

Fig. 8: CNT pressure sensor
Achieving reliable devices based on individual CNTs with high repeatability is still very challenging, because of the structural variation between individual nanotubes. This problem can be addressed by using CNT forests—a macroscopic network of virtually self-aligned CNTs which have anisotropic electromechanical properties that can be used in many MEMS applications.

CNT forests provide a very large surface area and exhibit piezoresistivity when a lateral stress is applied. This property has been utilised to create a very sensitive and accurate physical sensor for pressure measurement. The sensor is fabricated on a deflectable, 8 µm thick membrane of parylene (a biocompatible polymer) and hence is of particular interest for medical applications.

The membrane is suspended by a silicon frame. The sensitivities to positive and negative gauge pressures are found to be comparable in magnitude, with average values of -986 and +816 ppm/kPa, respectively.
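With the quoted sensitivities, pressure can be recovered from the fractional resistance change, assuming the response stays linear over the range of interest. This is a first-order sketch only; a real sensor needs calibration:

S_POS = -986e-6  # fractional resistance change per kPa, positive pressure
S_NEG = +816e-6  # fractional resistance change per kPa, negative pressure

def pressure_kpa(r_measured, r_zero, positive=True):
    # Invert deltaR/R0 = S * P in the linear region.
    fractional_change = (r_measured - r_zero) / r_zero
    return fractional_change / (S_POS if positive else S_NEG)

# Example: a 0.5 per cent resistance drop under positive pressure
print(f"{pressure_kpa(995.0, 1000.0):.1f} kPa")  # about 5.1 kPa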

As shown in Fig. 8, a multiwalled CNT forest is supported on a 5×5 mm² membrane of parylene-C of 8 µm thickness. The two opposing ends of the forest are connected to 130 nm thick molybdenum (Mo) metal pads on the substrate, which are used to measure the resistance of the forest.

Fig. 9: Operation of CNT pressure sensor 
When pressure is applied to the membrane, a strain is generated in it as well as in the forest, leading to a change in the resistance of the forest due to its piezoresistive effect. The source of piezoresistive effect of the CNT forest is related to the change in distances between individual CNTs due to lateral strains applied to the forest.

As shown in Fig. 9, a positive pressure that deflects the membrane downwards can widen the CNT separation around the centre region of the membrane and increase the resistance in the region. However, it can also narrow the separation around the two border regions between the membrane and the substrate, thereby decreasing resistance in these regions. The opposite behaviour can be caused by a negative pressure.

The polarity may be determined by the collective piezoresistive effects across the forest including the two border regions. However, these border regions are relatively large and play the dominant role in the overall piezoresistive effect.

Fig. 10: Microstructure of CNT

Fig. 11: Forest resistance against gauge pressure
Fig. 10 shows the individual CNT and needle-like microstructure of the CNT forest. The fabrication process of the CNT forest starts with formation of a Si-nitride mask on both sides of a Si substrate, followed by patterning of a square window in the mask on the backside of the substrate. A molybdenum film is deposited with an electron beam evaporation technique to form the metal pads on the front side of the substrate. A catalyst layer made of 2nm Fe on 10nm Al2O3 is deposited in the region where the CNT forest will be grown.

Chemical vapour deposition technique is used to grow a forest of 400-600µm height using C2H4 as the carbon source, followed by deposition of 8µm thick parylene-C film binding the tops of individual CNTs. The backside window is made by dry etching of the silicon substrate. Fig. 11 shows the responses of the sensor for positive and negative gauge pressures.


5. Photovoltaic cells
 
In order to alleviate global warming by reducing greenhouse gas emissions from the combustion of fossil fuels, various countries have committed to cut carbon emissions by 40-45 per cent and to increase the share of renewable energy to about 15 per cent of the total energy generated by 2020. This presents a big market opportunity for photovoltaic (PV) manufacturers.

More than 80 per cent of PV modules installed worldwide are made of crystalline silicon (c-Si). One of the main drawbacks of c-Si modules is their high cost, due to expensive raw material, a complicated manufacturing process and sophisticated equipment. Hence thin-film (TF) PV modules are rapidly gaining market share because of their lower cost per watt, especially due to much lower use of the critical light-absorbing materials.

Thin-film PV cells are made from amorphous silicon (a-Si), copper indium gallium diselenide (CIGS) or organic photovoltaic materials. The research focuses on the key to fabricating low-cost thin-film solar cells: controlling the microstructure, morphology and composition of the thin films and their interfaces.

Of all the thin-film solar cell technologies, CIGS is the most promising due to its higher conversion efficiency, lower toxicity and lower manufacturing cost compared with a-Si and CdTe cells. Reliable control of CIGS thin-film morphology and composition has been developed, which increases efficiency. The process uses selenium powder delivered by evaporation instead of highly toxic H2Se gas as the selenium source. A large-area purge-flow-controlled evaporation chamber has been designed, in conjunction with an advanced rapid thermal process and precise chemical bath deposition of the barrier layer, for large-area uniformity control. All these steps yield a better CIGS solar cell.

Although the conversion efficiency of organic photovoltaic cells is still low, they have high potential to bring affordable solar energy to different markets, as they involve simple processing and equipment. High-purity phenyl-C61-butyric acid methyl ester (PCBM) and low-cost novel donor materials are being evaluated. Process optimisation concentrates on controlling the microstructure, morphology and composition to maximise the conversion efficiency. Low-cost manufacturing is possible by adopting a roll-to-roll manufacturing line.

Light management significantly increases the efficiency of PV cells, especially in thin-film solar cells that are only a few micrometres thick. One of the approaches is the use of a photonic crystal for selective spectrum tailored for different bandgaps of solar cells. Photonic crystal is a macroscopic, periodic dielectric structure that possesses spectral gaps (stop bands) for electromagnetic waves, in analogy with the energy bands and gaps in regular semiconductors. The advantages of using photonic crystals for light redirection compared to organic dyed solar cells are lower cost, higher efficiency and better directional performance.

As shown in Fig. 1, a photonic crystal film of about 10µm thickness can be coated on a 3.2mm thick glass substrate which can laterally redirect light in different spectra. Different combinations of spectra can be made by suitably designing the photonic crystal. As the coating is thin enough, a transparent solar panel can be realised with a proper design where solar cells can be put on edges of the glass for electricity generation.

In a conventional Si PV cell, a lot of light energy is lost due to reflection. Conventional multistack dielectric antireflection coating may be used, which is costly and shows only limited bandwidth.

A maskless process requiring just a few steps has been developed to provide antireflectivity on silicon, with low reflectivity over a wide acceptance angle. Over a wide bandwidth of 400-800 nm, the weighted reflectivity of silicon can be substantially reduced, from 41 to 3 per cent at an acceptance angle of 15° and from 43 to 15 per cent at an acceptance angle of 60°. Fig. 2 shows such surface morphology on silicon, with nanopillars providing low reflectivity over a wide acceptance angle.


Fig. 1: PV cell using photonic crystal

Fig. 2: PV cell with nanopillar 

Monday, March 25, 2013

Thumbs down to nuclear energy: nuclear power means profits, power and politics

 

Although for decades the arguments against nuclear power have been, and still are, strong and valid, there is a growing group of nuclear companies, supported by scientists and politicians, who say we need nuclear power to fight climate change and to be energy-independent. They also claim that all the problems associated with nuclear power are solved or almost solved.

Why do they say this? Because for them, nuclear power means profits, power and politics.
Nuclear power produces nuclear waste that is highly radioactive. What exactly are radioactivity and radiation? Nuclear radiation occurs when unstable atoms decay. What radiation can do to living organisms was clearly illustrated when the former Russian KGB agent Alexander Litvinenko was poisoned with a tiny dose of polonium-210: it killed him within days.



It disrupts the functioning of the cells that make up our bodies. High levels of radiation kill cells, resulting in radiation burns, sickness and death.

Lower levels of radiation cause mutations, which can result in cancer and heritable genetic damage. These effects are unpredictable. As with smoking, we know there is a direct relation, but it strikes at random: if a large number of people are exposed to radiation, as happened after the Chernobyl accident, we know that some will get cancer and some women will give birth to children with genetic defects, but we cannot predict who will be affected. The effects can also be delayed, with cancers or birth defects occurring many years after exposure.

High levels of radiation are very dangerous. The nuclear industry believes that high-level waste can be stored relatively harmlessly. Most other sources note that the waste remains radioactive (far above free-release limits) for some 240,000 years.

The nuclear industry proposes long-term waste storage sites e.g. in bunkers or in deep rock formations, but has failed to realise such a long-term disposal site. It is impossible to guarantee the isolation of waste for hundreds of thousands of years. Once the waste is buried, there is no longer a possibility to check for and repair leakages. Leakages are simply a matter of time, i.e. the containers definitely will leak sometime in the future, releasing the radioactivity they contain.

Aboveground storage cannot be considered safe either. Although there is a possibility to control and repair the waste containers, mankind will be responsible for its management ‘forever’. Containers have to be replaced and the storage facility must be protected against war, terrorism and other potential dangers.

A study published in January 2007 in Nature casts new doubt on nuclear waste storage safety. Synthetic material that scientists had hoped would contain nuclear waste for thousands of years may not be as safe and durable as previously thought. It showed that this material (zircon) is susceptible to degradation faster than expected and may not be able to contain the waste until it becomes safe. The findings are particularly important for long-lived isotopes such as plutonium, uranium and neptunium.


The following case shows some of the problems related to underground storage. The story of the Asse salt mine is true but scarcely believable: until 1978, 124,000 barrels of low- and intermediate-level radioactive waste (including 24 kg of plutonium) were stored in a former salt mine at Asse, Lower Saxony, Germany. The waste was supposed to be stored 'forever' in dry salt.

Recent research showed that since 1988, brine has been flowing freely into the mineshaft daily (11.5 m³ per day in November 2006), causing the waste drums to rust; in total, 52 million litres have entered over 18 years. The former salt mine consists of open spaces (for transport and storage) and is now subsiding, with risk of collapse. An October 2006 survey showed that the combination of rust and radioactive waste could produce flammable or explosive gases, which build up pressure, pushing radioactive material upwards, possibly into the groundwater system. In reaction to the first signs of leakage, the German government started to stabilise the mine in 1995 by filling it with 2.5 million m³ of salt (over 15 years).

According to the Ministry of Science & Technology, these costs amount to several hundred million euros, to be paid by the government.


Koodankulam Nuclear Power Plant Could Have Deleterious Effects on Sri Lanka

Koodankulam Nuclear power

The Koodankulam nuclear power plant being built in South India is expected to go active within another 45 days. It is located only about 200 km from Sri Lanka, and any leakage of radiation could directly affect the island. Fukushima, and Chernobyl in the former Soviet Union, which affected thousands of lives, are the obvious warnings.

Clean Energy?

Nuclear energy is claimed to be the answer to our climate problems since it is 'clean'. However, a life-cycle analysis, which takes into account the energy-intensive processes of mining and enriching the uranium ore, constructing and dismantling the nuclear plant, and disposing of the hazardous waste, shows that nuclear is definitely not carbon-free. In fact, emissions from a nuclear plant in the U.S. can range from 16-55 grams of CO2 per kilowatt-hour over the lifetime of the plant.
A large uranium enrichment and nuclear power plant in France. Pretty ugly, isn't it?
Compared to wind (11-37 gCO2/kWh) and biomass (29-62 gCO2/kWh), nuclear is no cleaner than renewables. Furthermore, nuclear power will only become more polluting in the future: increased nuclear production will deplete the supply of high-grade uranium, and much more energy is required to enrich uranium at lower grades.

At the same time, the International Atomic Energy Agency has already acknowledged that current uranium resources are not sufficient to meet increased demand in the future.
A report from The Oxford Research Group predicts that in 45 to 70 years, nuclear energy will emit more carbon dioxide than gas-fired electricity.

Running Out of Time

Nuclear power plants are a slow technology that cannot address global warming quickly enough. The nature of climate change demands that we begin reducing greenhouse gas emissions now and continue doing so over the next few decades. NASA scientist James Hansen says that we have a 10-year window before global warming reaches its tipping point and major ecological and societal damage becomes unavoidable. Even if a nuclear energy project were given government approval today, it would take about 10 years for the plant to start delivering electricity. Before then, emissions would actually increase from construction, speeding up global warming.

Real Solutions Are Waiting On the Shelf

Nuclear power might be a reasonable option for solving climate change if it were the only alternative to coal and natural gas. Fortunately, cleaner, cheaper, quicker solutions to global warming are already available. We can also take advantage of huge potential energy savings through efficiency. That doesn't mean being forced to do without; it simply means going further with each kilowatt of electricity.

Energy efficiency is not only the cheapest and easiest way to reduce our carbon dioxide emissions; it will actually save consumers money. A report from the McKinsey Global Institute stated that the installation of highly efficient light bulbs and appliances nation-wide could displace the equivalent output of more than 60 large nuclear plants.

Clearly, there’s room for improvement.
The primary argument made for increased energy consumption is that it fuels economic growth. However, we can still achieve substantial economic growth without building new power plants, except to replace ones that retire. In fact, since 1990, about half of the increased energy demand worldwide has been met with increased efficiency, not new generation.
The promise of renewable energy options continues to improve as well; modern-day wind turbines are already less expensive than nuclear power and, as the technology continues to improve, costs are dropping even lower.

Rural Systems: Solar energy is free, but what does it really cost?




Solar panels powering a rural cultural and drama centre
 
 
'Solar energy is free, but it's not cheap' best sums up the major hurdle for the solar industry. There are no technical obstacles per se to developing solar energy systems, even at the utility megawatt level (e.g., the 14 MW utility-scale PV system at Nellis AFB or the 64 MW CSP system in Nevada); however, at such large scales a high initial capital investment is required.




Over the past three decades, the cost of solar products has fallen significantly, even without counting environmental benefits; yet solar power is still considered a relatively expensive technology. For some small- and medium-scale applications, such as passive solar design for homes, the initial cost of a home designed to use solar power is essentially no more than that of a regular home, and operating costs are much less.

The only difference is that the solar-energy home works with the Sun throughout the year and needs smaller mechanical systems for cooling and heating, while poorly designed homes fight the Sun and are iceboxes in the winter and ovens in the summer.
Industrial society and modern agriculture were founded on fossil fuels (coal, oil, and gas). The world will make a gradual shift throughout the twenty-first century from burning fuels to technologies that harness clean energy sources such as sun and wind.
As energy demand increases with modernization in developing countries and fossil fuel supply constricts, rising fuel prices will force alternatives to be introduced. The cost of technologically driven approaches to clean energy will continue to fall and become more competitive.
Eventually, clean energy technologies will be the inexpensive solution.
 
As the full effect and impact of environmental externalities such as global warming become apparent, society will demand cleaner energy technologies and policies that favor development of a clean-energy industrial base. By the end of the twenty-first century, clean-energy sources will dominate the landscape.
This will not be an easy or cheap transition for society, but it is necessary and inevitable.
 

Rural Systems

Rural Pakistani village
 
 
Already, solar energy is cost effective for many urban and rural applications. Solar hot-water systems are very competitive, with typical paybacks of 5-7 years compared with electric water heaters (depending on the local solar resource).
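The payback figure is simple arithmetic: divide the upfront cost premium by the yearly energy savings. A sketch with hypothetical but plausible numbers:

def simple_payback_years(extra_cost, annual_savings):
    # Years until cumulative savings cover the upfront premium
    return extra_cost / annual_savings

# Assumed figures: a solar water heater costing $2,400 more than an
# electric unit, and saving $400 per year of electricity.
print(f"payback: {simple_payback_years(2400, 400):.1f} years")  # 6.0 years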

PV systems are already cost competitive for sites that are remote from the electric grid, although they are also popular for on-grid applications as environmental "elitists" try to demonstrate that they are "green."

However, one should beware of "greenwashing," as people and companies install grid-tied PV systems without making efforts to install energy-efficient equipment first. Far more can be achieved for reducing carbon emissions through energy conservation than through solar energy alone.
 
The decision to use a solar energy system over conventional technologies depends on the economic, energy security, and environmental benefits expected. Solar energy systems have a relatively high initial cost; however, they do not require fuel and often require little maintenance. Due to these characteristics, the long-term life cycle costs of a solar energy system should be understood to determine whether such a system is economically viable.

Historically, traditional business entities have always couched their concerns in terms of economics. They often claim that a clean environment is uneconomical or that renewable energy is too expensive. They want to continue their operations as in the past because, sometimes, they fear that if they have to install new equipment, they cannot compete in the global market and will have to reduce employment, jobs will go overseas, rates must increase, etc.

The different types of economics to consider are pecuniary, social, and physical. Pecuniary economics is what everybody thinks of as economics: dollars. Social costs are those borne by everybody, and many businesses want the general public to pay for their environmental costs; if environmental problems affect human health today or in the future, who pays? Physical economics is the energy cost and the efficiency of the process; there are fundamental limitations in nature due to physical laws. In the end, the choice is to pay now or, probably, pay more in the future, and it is the environment and future generations that suffer.

An economic analysis should look at life cycle costs rather than just the ordinary way of doing business with its low initial costs. Life cycle costs refer to all costs over the lifetime of the system. Incentives and penalties for the energy entities should also be accounted for.
What each entity wants is subsidies for itself and penalties for its competitors. Penalties come in the form of taxes and fines; incentives may come in the form of tax breaks, unaccounted-for social and environmental costs, and what the government (society) pays for research and development.
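A life-cycle comparison discounts every year's costs back to present value, so that a high-first-cost, low-operating-cost solar system can be compared fairly with a cheap-to-buy, expensive-to-run conventional one. A minimal sketch, with every input hypothetical:

def life_cycle_cost(initial, annual_costs, discount_rate=0.05):
    # Present value of the initial cost plus each year's operating cost
    pv = initial
    for year, cost in enumerate(annual_costs, start=1):
        pv += cost / (1 + discount_rate) ** year
    return pv

YEARS = 20
solar = life_cycle_cost(6000, [50] * YEARS)          # high first cost, low upkeep
conventional = life_cycle_cost(1500, [450] * YEARS)  # low first cost, high fuel
print(f"solar LCC:        ${solar:,.0f}")
print(f"conventional LCC: ${conventional:,.0f}")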


 

Saturday, March 23, 2013

Hypotheses for Bat Attraction to Wind Turbines

Why Are Bats So Insanely Attracted to Wind Turbines?


Bat Mortality from Collisions and Barotrauma

 

Bats that fly too close to wind turbines are killed either by direct impact or by the major air-pressure changes around the spinning rotors.

While bats clearly are killed by direct collision with turbine blades, up to 50 percent of the dead bats around wind turbines are found with no visible sign of injury.

The cause for this non-collision mortality is believed to be a type of decompression known as barotrauma, resulting from rapid air pressure reduction near moving turbine blades.
Barotrauma kills bats near wind turbines by causing severe tissue damage to their lungs, which are large and pliable, thereby overly expanding when exposed to a sudden drop in pressure.

By contrast, barotrauma does not affect birds because they have compact, rigid lungs that do not excessively expand.

Bat Attraction to Wind Turbines

Many species of bats appear to be significantly attracted to wind turbines for reasons that are still poorly understood.

Here we'll try to summarize the more plausible scientific hypotheses that have been advanced to date. By contrast, birds are not normally attracted to wind turbines, and simply collide with them by accident.

The Eastern Red Bat Lasiurus borealis is typical of the migratory, tree-roosting bat species that are frequent casualties at some wind farms in North America.

9 Hypotheses for Bat Attraction to Wind Turbines

Various scientific hypotheses have been proposed as to why bats are seemingly attracted to, or fail to detect, wind turbines. The more plausible hypotheses include the following:

1. Auditory Attraction

Bats may be attracted to the audible “swishing” sound produced by wind turbines. Museum collectors seeking bat specimens have used long poles that were swung back and forth to attract bats and then knock them to the ground for collection.
It is not known if these bats were attracted to the audible “swishing” sound, the movement of the pole, or both factors.

2. Electromagnetic Field Disorientation

Wind turbines produce complex electromagnetic fields, which may cause bats in the general vicinity to become disoriented and continue flying close to the turbines.

3. Insect Attraction

As flying insects may be attracted to wind turbines, perhaps due to their prominence in the landscape, white color, lighting sources, or heat emitted from the nacelles, bats would be attracted to concentrations of prey.

4. Heat Attraction

Bats may be attracted to the heat produced by the nacelles of wind turbines because they are seeking warm roosting sites.

5. Roost Attraction

Wind turbines may attract bats because they are perceived as potential roosting sites.

6. Lek Mating

Migratory tree bats may be attracted to wind turbines because they are the highest structures in the landscape along migratory routes, possibly thereby serving as rendezvous points for mating.

7. Linear Corridor

Wind farms constructed along forested ridge-tops create clearings with linear landscapes that may be attractive to bats.

8. Forest Edge Effect

The clearings around wind turbines and access roads located within forested areas create forest edges. At forest edges, insect activity may well be higher, along with bats' ability to capture insects in flight. Resident bats, as well as migrants making stopovers, may be attracted to these areas to feed, increasing their exposure to turbines and thus mortality from collision or barotrauma.

9. Thermal Inversion

Thermal inversions create dense fog in cool valleys, thus concentrating both bats and their insect prey on ridge-tops.

Bat Species Most Significantly Affected

A bat killed by wind turbine blades
In North America, migratory bat species have been found dead at wind farms much more frequently than the resident (non-migratory) species, even in areas where the resident species are more common throughout the summer.

Eleven of the 45 species of bats that occur in North America north of Mexico have been found dead at wind farms, but most studies report that the mortality is heavily skewed towards migratory, tree-roosting species such as the Hoary Bat Lasiurus cinereus, Eastern Red Bat Lasiurus borealis, and Silver-haired Bat Lasionycteris noctivagans.

While these three species are not listed as threatened or endangered under the U.S. Endangered Species Act, they are classified as of Special Management Concern at the provincial level in Canada. Although the globally endangered Indiana Bat Myotis sodalis has not yet been found dead at wind farms, potential new wind farms within this species' remaining strongholds could possibly threaten it.

In Europe, 19 of the 38 species of bats found within the European Union have been reported killed by wind turbines.
 
Although migratory species are among the most numerous casualties, resident bats are also killed in substantial numbers, particularly in forested areas.
Turbine-related bat mortality has been found in every European country in which bat monitoring has been done, except for Poland where no dead bats were found during monitoring at two sites. The highest numbers of bat fatalities have been found in Germany and France, which is almost certainly due to the more extensive monitoring carried out in those countries.

Bat Detection Technology Demonstration for Wind Turbines

Bats are worth billions to the agriculture industry due to their natural control of pests. Unfortunately, wind power poses a risk to bats due to the potential for them to be struck by spinning turbine blades.

EPRI is working with We Energies to demonstrate a specialized technology that uses ultrasonic microphones to detect the presence of bats. If the microphones pick up the high-pitched squeaks and clicks bats make, the turbines will automatically shut down and restart when the bats are out of range.

The project is focused on reducing bat mortality at wind farms while avoiding long-term curtailments and maximizing electricity production.
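As a rough illustration, the control logic of such a smart-curtailment system might look like the Python sketch below. The detector and turbine classes, the 20 kHz cut-off and the 10-minute restart delay are all assumptions for illustration, not details of the EPRI/We Energies system.

import time, random

# Illustrative sketch only: the real detector/turbine interfaces are not
# public, so these classes are hypothetical stand-ins.

class UltrasonicDetector:
    def read_peak_khz(self):
        # Stub: return the peak frequency (kHz) heard this second, or None.
        return random.choice([None, None, 45.0])

class Turbine:
    def __init__(self): self.running = True
    def shut_down(self): self.running = False
    def restart(self): self.running = True

BAT_CALL_MIN_KHZ = 20      # bat echolocation is roughly 20-200 kHz (assumed cut-off)
RESTART_DELAY_S = 600      # assumed quiet period before restarting

def curtailment_step(detector, turbine, last_call):
    """One pass of the detect-and-curtail loop; returns updated last-call time."""
    freq = detector.read_peak_khz()
    now = time.time()
    if freq is not None and freq >= BAT_CALL_MIN_KHZ:
        if turbine.running:
            turbine.shut_down()        # stop the rotor while bats are in range
        return now
    if last_call and now - last_call > RESTART_DELAY_S and not turbine.running:
        turbine.restart()              # resume generation once bats are gone
    return last_call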


In Latin America, 19 bat species were represented among the 123 individual bats found dead under wind turbines in 2007-2008 at the La Venta II project in southern Mexico. In 2009, 20 different bat species were involved (INECOL 2009).
Thirteen of these species are insectivores, while two feed mainly on nectar, and two on fruit.
 
The most commonly killed species, Davy’s Naked-backed Bat Pteronotus davyi, is thought to be resident in the area, although some other frequently killed species at La Venta II are considered to be migratory.

Interestingly, despite the enormous concentrations of migratory birds that pass over or through the La Venta II wind farm (over 1 million per year), monitoring data from INECOL show that more bats are being killed there than birds.

Super Capacitors – Different Than Others (Part 2)

Equivalent circuit

Super capacitors can be modeled similarly to conventional film, ceramic or aluminum electrolytic capacitors.

Figure 3 - First order model of a super capacitor

This equivalent circuit is only a simplified, first-order model of a super capacitor. In actuality, super capacitors exhibit non-ideal behavior due to the porous materials used to make the electrodes, which causes them to behave more like transmission lines than ideal capacitors.
Below is a more accurate illustration of the equivalent circuit for a super capacitor:

Figure 4 - Model of a super capacitor
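As a rough numerical illustration of the first-order model in Figure 3, the sketch below estimates what an assumed cell (10 F, 50 mΩ ESR, 10 kΩ leakage) does when a 1 Ω load is connected; all component values are assumed, not taken from any particular datasheet.

import math

C = 10.0        # farads
ESR = 0.05      # ohms (equivalent series resistance)
R_LEAK = 10e3   # ohms (parallel leakage path)
R_LOAD = 1.0    # ohms (external load)
V0 = 2.5        # initial cell voltage

# The terminal voltage drops instantly by the ESR divider before the
# capacitive discharge even begins:
v_step = V0 * R_LOAD / (R_LOAD + ESR)
print(f"Instantaneous terminal voltage: {v_step:.3f} V (IR drop across ESR)")

# The capacitor voltage then decays exponentially through ESR + load
# (leakage is negligible here because R_LEAK >> R_LOAD):
tau = C * (ESR + R_LOAD)
for t in (0, 5, 10, 20):
    v = V0 * math.exp(-t / tau)
    print(f"t = {t:>2} s: capacitor voltage ≈ {v:.2f} V")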


How to measure the capacitance?

There are a couple of ways used to measure the capacitance of super capacitors:
  1. Charge method
  2. Charging and discharging method.

Charge Method

Measurement is performed by charging the capacitor through a known resistance R and timing the voltage rise:

C = t / R

where t is the time taken for the capacitor voltage to reach 0.632 × V0, and V0 is the applied voltage.
Figure 5 - Charge and discharge methods
 

Discharge Method

This method is similar to the charging method except the capacitance is calculated during the discharge cycle instead of the charging cycle.
Discharge time for constant-current discharge:

t = C × (V0 – V1) / I

Discharge time for constant-resistance discharge:

t = C × R × ln(V0 / V1)

Where:
t – discharge time
V0 – initial voltage
V1 – ending voltage
I – discharge current
R – discharge resistance
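A quick worked example of the two formulas, with assumed values (a 50 F capacitor discharged from 2.5 V to 1.25 V):

import math

C = 50.0     # farads
V0 = 2.5     # initial voltage, volts
V1 = 1.25    # ending voltage, volts

# Constant-current discharge: t = C * (V0 - V1) / I
I = 1.0      # amperes
t_cc = C * (V0 - V1) / I
print(f"Constant-current discharge time: {t_cc:.1f} s")     # 62.5 s

# Constant-resistance discharge: t = C * R * ln(V0 / V1)
R = 1.0      # ohms
t_cr = C * R * math.log(V0 / V1)
print(f"Constant-resistance discharge time: {t_cr:.1f} s")  # ~34.7 s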

Measure Capacitance

Super capacitors have such large capacitance values that standard capacitance-measuring equipment cannot be used; instead, capacitance is determined from a timed constant-current discharge.
Capacitance is measured per the following method:
  1. Charge the capacitor for 30 minutes at rated voltage.
  2. Discharge the capacitor through a constant-current load.
  3. Set the discharge rate to 1 mA/F.
  4. Note the voltage points V1 and V2.
  5. Measure the times T1 and T2 at which the capacitor voltage passes V1 and V2.
  6. Calculate the capacitance using the following equation:

C = I × (T2 – T1) / (V1 – V2)

Where:
V1 = 0.7 × Vr, V2 = 0.3 × Vr (Vr – rated voltage of capacitor)
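A sketch of the calculation in step 6, using assumed readings for a nominally 10 F part:

Vr = 2.5                       # rated voltage of the capacitor
V1, V2 = 0.7 * Vr, 0.3 * Vr    # measurement points: 1.75 V and 0.75 V

I = 0.010                      # 1 mA/F for a nominally 10 F part
T1, T2 = 75.0, 1075.0          # times (s) at which voltage crossed V1 and V2 (assumed readings)

C = I * (T2 - T1) / (V1 - V2)
print(f"Measured capacitance: {C:.1f} F")   # 0.010 * 1000 / 1.0 = 10.0 F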

Capacitor types

We group capacitors into three family types; the most basic is the electrostatic capacitor, with a dry separator. This capacitor has a very low capacitance and is used to filter signals and tune radio frequencies. Its size ranges from a few picofarads (pF) to low microfarads (µF).

Capacitor types
The next member is the electrolytic capacitor, which is used for:
  1. Power filtering,
  2. Buffering and
  3. Coupling.
Rated in microfarads (μF), this capacitor has several thousand times the storage capacity of the electrostatic capacitor and uses a moist separator.

How a Capacitor Works – by Dr. Oliver Winn

The third type is the supercapacitor, rated in farads, which is again thousands of times higher than the electrolytic capacitor. The supercapacitor is ideal for energy storage that undergoes frequent charge and discharge cycles at high current and short duration.
Farad is a unit of capacitance named after the English physicist Michael Faraday. One farad stores one coulomb of electrical charge when applying one volt. One microfarad is one million times smaller than a farad, and one pico-farad is again one million times smaller than the microfarad.
Engineers at General Electric first experimented with the electric double-layer capacitor, which led to the development of an early type of supercapacitor in 1957. There were no known commercial applications then.
In 1966, Standard Oil rediscovered the effect of the double-layer capacitor by accident while working on experimental fuel cell designs. The company did not commercialize the invention but licensed it to NEC, which in 1978 marketed the technology as “supercapacitor” for computer memory backup.
It was not until the 1990s that advances in materials and manufacturing methods led to improved performance and lower cost.
The modern supercapacitor is not a battery per se but crosses the boundary into battery technology by using special electrodes and electrolyte. Several types of electrodes have been tried and we focus on the double-layer capacitor (DLC) concept. It is carbon-based, has an organic electrolyte that is easy to manufacture and is the most common system in use today.
All capacitors have voltage limits. While the electrostatic capacitor can be made to withstand high voltages, the supercapacitor is confined to 2.5–2.7 V. Voltages of 2.8 V and higher are possible, but they would reduce the service life.
To achieve higher voltages, several supercapacitors are connected in series.
 
This has disadvantages.
Serial connection reduces the total capacitance, and strings of more than three capacitors require voltage balancing to prevent any cell from going into over-voltage. This is similar to the protection circuit in lithium-ion batteries.
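The trade-off can be put in numbers. The sketch below assumes a string of six identical 100 F, 2.5 V cells:

# n identical cells in series: voltage adds, capacitance divides by n.
C_cell = 100.0   # farads per cell
V_cell = 2.5     # volts per cell
n = 6            # cells in series

V_string = n * V_cell    # 15 V working voltage
C_string = C_cell / n    # ~16.7 F total capacitance
print(f"String: {V_string:.1f} V, {C_string:.1f} F")

# Stored energy E = 0.5 * C * V^2 is the same as n times one cell:
E_string = 0.5 * C_string * V_string**2
print(f"Stored energy: {E_string:.0f} J")   # 1875 J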
The specific energy of the supercapacitor is low and ranges from 1 to 30Wh/kg. Although high compared to a regular capacitor, 30Wh/kg is one-fifth that of a consumer Li-ion battery. The discharge curve is another disadvantage. Whereas the electrochemical battery delivers a steady voltage in the usable power band, the voltage of the supercapacitor decreases on a linear scale from full to zero voltage.

This reduces the usable power spectrum and much of the stored energy is left behind.
Consider the following example.
Take a 6V power source that is allowed to discharge to 4.5V before the equipment cuts off. With the linear discharge, the supercapacitor reaches this voltage threshold within the first quarter of the cycle and the remaining three-quarters of the energy reserve become unusable.
A DC-to-DC converter could utilize some of the residual energy, but this would add to the cost and introduce a 10 to 15 percent energy loss. A battery with a flat discharge curve, on the other hand, would deliver 90 to 95 percent of its energy reserve before reaching the voltage threshold.
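A quick check of this example: under constant-current discharge the voltage falls linearly with time, so the 4.5 V cut-off is indeed reached after a quarter of the discharge time, stranding three-quarters of the charge; in ½CV² energy terms the stranded fraction works out to roughly 56 percent.

V_full, V_cut = 6.0, 4.5

# Charge is proportional to voltage (Q = C*V):
charge_left = V_cut / V_full
print(f"Charge left behind: {charge_left:.0%}")     # 75%

# Energy is proportional to voltage squared (E = 0.5*C*V^2):
energy_left = (V_cut / V_full) ** 2
print(f"Energy left behind: {energy_left:.1%}")     # 56.2%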

Table 1 below compares the supercapacitor with a typical Li-ion:


Function | Supercapacitor | Lithium-ion (general)
Charge time | 1–10 seconds | 10–60 minutes
Cycle life | 1 million or 30,000 h | 500 and higher
Cell voltage | 2.3 to 2.75 V | 3.6 to 3.7 V
Specific energy (Wh/kg) | 5 (typical) | 100–200
Specific power (W/kg) | Up to 10,000 | 1,000 to 3,000
Cost per Wh | $20 (typical) | $0.50–$1.00 (large system)
Service life (in vehicle) | 10 to 15 years | 5 to 10 years
Charge temperature | –40 to 65°C (–40 to 149°F) | 0 to 45°C (32 to 113°F)
Discharge temperature | –40 to 65°C (–40 to 149°F) | –20 to 60°C (–4 to 140°F)

Rather than operating as a stand-alone energy storage device, supercapacitors work well as low-maintenance memory backup to bridge short power interruptions. Supercapacitors have also made critical inroads into electric powertrains.
The virtue of ultra-rapid charging and delivery of high current on demand makes the supercapacitor an ideal candidate as a peak-load enhancer for hybrid vehicles, as well as fuel cell applications.

The charge time of a supercapacitor is about 10 seconds.

The charge characteristic is similar to an electrochemical battery and the charge current is, to a large extent, limited by the charger. The initial charge can be made very fast, and the topping charge will take extra time.
Provision must be made to limit the initial current inrush when charging an empty supercapacitor.
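To see why: an empty supercapacitor looks almost like a short circuit, so the initial current is limited only by the ESR and any series resistance in the charger. A sketch with assumed values:

V_charger = 2.5   # volts
ESR = 0.02        # ohms (cell equivalent series resistance, assumed)

I_inrush = V_charger / ESR
print(f"Unlimited inrush: {I_inrush:.0f} A")        # 125 A

# A modest series resistor tames this:
R_limit = 0.5
print(f"With 0.5 ohm limiter: {V_charger / (ESR + R_limit):.1f} A")  # ~4.8 A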
The supercapacitor cannot go into overcharge and does not require full-charge detection; the current simply stops flowing when the capacitor is full. The supercapacitor can be charged and discharged virtually an unlimited number of times. Unlike the electrochemical battery, which has a defined cycle life, there is little wear and tear by cycling a supercapacitor.

Nor does age affect the device, as it would a battery.

Under normal conditions, a supercapacitor fades from the original 100 percent capacity to 80 percent in 10 years. Applying higher voltages than specified shortens the life. The supercapacitor functions well at hot and cold temperatures.
The self-discharge of a supercapacitor is substantially higher than that of an electrostatic capacitor and somewhat higher than the electrochemical battery. The organic electrolyte contributes to this.

The stored energy of a supercapacitor decreases from 100 to 50 percent in 30 to 40 days.
 
A nickel-based battery self-discharges 10 to 15 percent per month. Li-ion discharges only five percent per month.
Supercapacitors are expensive in terms of cost per watt. Some design engineers argue that the money for the supercapacitor would better be spent on a larger battery.
We need to realize that the supercapacitor and chemical battery are not in competition; rather they are different products serving unique applications.

Advantages of the supercapacitors

  1. Cell voltage determined by the circuit application, not limited by the cell chemistry.
  2. Very high cell voltages possible (but there is a trade-off with capacity)
  3. High power available.
  4. High power density.
  5. Simple charging methods. No special charging or voltage detection circuits required.
  6. Very fast charge and discharge. Can be charged and discharged in seconds. Much faster than batteries.
  7. No chemical actions.
  8. Cannot be overcharged.
  9. Long cycle life of more than 500,000 cycles at 100% DOD.
  10. Long calendar life 10 to 20 years
  11. Virtually unlimited cycle life – not subject to the wear and aging experienced by the electrochemical battery.
  12. Low impedance – enhances pulse current handling by paralleling with an electrochemical battery.
  13. Rapid charging – low-impedance supercapacitors charge in seconds.
  14. Simple charge methods – voltage-limiting circuit compensates for self-discharge; no full-charge detection circuit needed.
  15. Cost-effective energy storage – lower energy density is compensated by a very high cycle count.
  16. Almost zero maintenance and long life, with little degradation over hundreds of thousands of cycles.
    While most commercially available rechargeable batteries can be charged 200 to 1,000 times, ultracapacitors can be charged and discharged hundreds of thousands of times without damage. In practice, they can be cycled a virtually unlimited number of times and will last for the entire lifetime of most devices and applications they are used in, making them environmentally friendly.
    Battery lifetime can be optimized by charging only under favorable conditions, at an ideal rate and, for some chemistries, as infrequently as possible.
    Ultracapacitors can help in conjunction with batteries by acting as a charge conditioner, storing energy from other sources for load-balancing purposes and then using any excess energy to charge the batteries at a suitable time.
  17. Increased safety since they can handle short circuit and reverse polarity. Also, there is no fire and explosion hazard.
  18. Improved environmental safety since there is no corrosive electrolyte and toxicity of materials used is low.
    Rechargeable batteries on the other hand wear out typically over a few years, and their highly reactive chemical electrolytes present a disposal and safety hazard.
  19. Rugged, since they have an epoxy-resin-sealed case, which is non-corrosive.
 
 

Surge Protection of Electronic Equipment – Transient Voltage Surge Suppressors

Introduction

Generally, power circuits use components with large thermal capacities, which makes it impossible for them to attain very high temperatures quickly except during very large or prolonged disturbances; damaging them therefore requires correspondingly large surge energies. Also, the materials that constitute the insulation of these components can operate at temperatures as high as 200 ºC, at least for short periods.
Electronic circuits, on the other hand, use components that operate at very small voltage and power levels. Even small magnitude surge currents or transient voltages are enough to cause high temperatures and voltage breakdowns.

Transient Voltage Surge Suppressors

Transient Voltage Surge Suppressor (TVSS) is a device that every data center or mission critical facility should have.
Why should every data center have one, and what does it do?
The purpose of a TVSS is to eliminate or reduce damage to data processing and other critical equipment by limiting transient surge voltages and currents (surges) on electrical circuits.
These transients or surges may originate inside a facility or may be injected into it from outside.

What is a transient?

A transient surge is a short blast or pulse of high energy that can either occur naturally, as in lightning, or be produced by other equipment.
Transients caused by other equipment usually result from the discharge of stored energy in inductive components. Examples are electrical motors, such as those used in elevators, heating, air conditioning, refrigeration and other inductive loads. Two other sources are arc welders and furnace igniters. These transients are capable of causing significant damage to equipment and electronics.
A transient damages a device when the transient voltage exceeds the ability of the weakest exposed component to withstand it. Transients normally flow into equipment via the power conductors, but other paths are common: telephone lines, data-com lines, measurement and control lines, DC power buses, and neutral and ground lines.
To protect against these surges, designers recommend installing TVSS devices that connect to all points of potential voltage threat and limit the voltage to a level below the equipment "withstand" voltage. The TVSS device absorbs or diverts the energy present in the surge, clamping the "let-through" overvoltage down to a level safe for the exposed circuitry.
TVSS protection is typically applied at several points throughout a facility, including the service entrance, distribution panels, branch panels and individual circuits.
As you can see, a TVSS device is important to a mission-critical electrical system and its benefits are great. A TVSS is a low-cost protection device that helps reduce downtime and production losses. It helps to extend lighting lamp and ballast life expectancy, reduces motor stress and overheating, and provides constant protection of data processing and digital equipment.
If your mission-critical facility does not already have TVSS devices installed, we highly recommend them. If you are not sure whether your system has them, ask your engineer or electrician to verify. It is a small price for additional peace of mind.

Main causes of transient overvoltages

1. Lightning Strike

a) A lightning strike can have a destructive or disturbing effect on electrical installations situated up to several miles away from the actual point of the strike.

b) During a storm, underground cables can transmit the effect of a lightning strike to electrical equipment installed inside buildings.

c) A lightning protection device (such as a lightning rod or a Faraday cage) installed on a building to protect it against the risk of a direct strike (fire) can increase the risk of damage to electrical equipment connected to the main supply near or inside the building.
Left: Direct lightning strike on overhead line; Right: Indirect lightning strike on ground

Lightning strike on lightning rod

The lightning protection device diverts the high strike current to earth, considerably raising the potential of the ground close to the building on which it is installed.
This causes overvoltages on the electrical equipment directly via the earth terminals and induced via the underground supply cables.

2. Switching operation on the power distribution system

The switching of transformers, motors or inductances in general, sudden load variations, and the opening of circuit breakers or cut-outs lead to overvoltages that penetrate the user's building.
Significantly, the closer the building is to a generating station or substation, the higher the over voltages may be.
Medium voltage disturbance transmitted to low voltage side of transformer

It is also necessary to take into account mutual induction effects between high-voltage power lines and aerial sections of low-voltage lines, as well as direct contact between lines of different voltages caused by accidental breaking of cables.

3. Parasitic interferences

These are random interferences of varying amplitudes and frequencies that are re-injected into the electrical supply by the user or the surrounding environment.

4. Disturbances generated by the user

Disturbance generated by the user

These interferences carry little energy, but their short duration, steep wave front and high peak value can harm the proper functioning of sensitive equipment, causing either disruption or complete destruction.



This is so because of the very small electrical clearances involved in PCBs and ICs (often microns) and the very poor temperature-withstanding ability of many semiconductor materials, which form the core of these components.
 
As such, a higher degree of surge protection is called for if these devices have to operate safely in the normal electrical system environment.
Thus comes the concept of surge protection zones (SPZs).
According to this concept, an entire facility can be divided into zones, each with a higher level of protection and nested within one another.
As we move up the SPZ scale, the surges become smaller in magnitude, and protection better.
  • Zone 0: This is the uncontrolled zone of the external world with surge protection adequate for high-voltage power transmission and main distribution equipment.
  • Zone 1: Controlled environment that adequately protects the electrical equipment found in a normal building distribution system.
  • Zone 2: This zone has protection catering to electronic equipment of the more rugged variety (power electronic equipment or control devices of discrete type).
  • Zone 3: This zone houses the most sensitive electronic equipment, and protection of the highest possible order is provided (includes computer CPUs, distributed control systems, devices with ICs, etc.).

The SPZ principle is illustrated in Figure 1.
Figure 1 - Zoned protection approach

We call this the zoned protection approach: the order of magnitude of the surge current is reduced appropriately as we go further down into the zones, into the facility itself. In the uncontrolled environment outside the building, we might consider a surge amplitude of, say, 1000 A.

As we move into the first level of controlled environment, called zone 1, we would get a reduction by a factor of 10 to possibly 100 A of surge capability. As we move into a more specific location, zone 2, perhaps a computer room or a room where various sensitive hardware exist, we find another reduction by a factor of 10.

Finally, within the equipment itself, we may find another reduction by a factor of 10, leaving the effect of this surge at basically one ampere at the device itself. IEEE C62.41 describes a similar but slightly different approach to protection zones.
The idea of the zoned protection approach is to utilize the inherent inductance of the facility wiring to help attenuate the surge current magnitude as we move further and further away from the service entrance to the facility.
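In numbers, the cascade described above looks like this (the 1000 A starting figure is the example from the text; the factor of 10 per zone boundary is the rule of thumb given above):

# Order-of-magnitude surge attenuation per protection zone.
surge_a = 1000.0   # amperes entering from zone 0
for zone in range(4):
    print(f"Zone {zone}: ~{surge_a:.0f} A")
    surge_a /= 10  # each zone boundary reduces the surge by roughly 10x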
 
The transition between zones 0 and 1 is further elaborated in Figure 2. Here we have a detailed picture of the entrance into the building where the telecommunications, data communications and the power supply wires all enter from the outside to the first protected zone.

Notice that the surge protection device (SPD) is basically stripping any transient phenomena on any of these metallic wires, referencing all of this to the common service entrance earth even as it is attached to the metallic water piping system.
Figure 2 - The transition from zone 0 to zone 1

Similarly, the protection for zone 2 at the transition point from zone 1 is shown in Figure 3.
Here, at the transition between the controlled zone 1 and a plug-in device located in zone 2, we can see that surge protection devices are available to handle telecommunications, data and power, with the appropriate physical plug connections for each, including both the RJ type of telephone plug and coaxial wiring.

Figure 3 - The transition from zone 1 to zone 2

A common design error occurs where there are two points of entry, and therefore two separate earthing points, for the AC power and telecommunication circuits.
The use of the TVSS devices at each point is highly beneficial in controlling the line-to-line and line-to-earth surge conditions at each point of entry, but the arrangement cannot perform this task between points of entry.
 
This is of paramount importance since the victim equipment is connected between the two points. Hence, a common-mode surge current will be driven through the victim equipment between the two circuits despite the presence of the much-needed TVSS.
The minimal result of the above is corruption of the data and maximally, there may be fire and shock hazard involved at the equipment.

No matter what kind of TVSS is used in the above arrangement, nor how many and what kind of additional individual, dedicated earthing wires are used, the stated problem will remain much as discussed above. All wires possess self-inductance and, because of the induced voltage e = −L × dI/dt, cannot equalize potential across themselves under fast impulse/surge conditions.

Such wires may self-resonate at quarter-wavelengths and odd multiples thereof, and this is also harmful. The same applies to metal pipes, steel beams, etc.
Earthing to these nearby items may nevertheless be needed to avoid lightning side-flash.
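A rough calculation shows the scale of the problem. Assuming the common rule of thumb of about 1 µH of self-inductance per metre of wire and a 10 kA impulse rising in 1 µs:

# e = L * dI/dt across a length of earthing wire (values assumed).
L_per_m = 1e-6          # henries per metre (approximate rule of thumb)
dI_dt = 10e3 / 1e-6     # 10 kA impulse rising in 1 us

e_per_m = L_per_m * dI_dt
print(f"Induced voltage: ~{e_per_m/1000:.0f} kV per metre of wire")   # ~10 kV/m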

From the above, it is clear that the type of surge protection depends on the type of zone and the equipment to be protected. We will illustrate this further by example, proceeding from the uncontrolled area of zone 0.
Let us begin by talking about what happens when a lightning strike hits an overhead distribution line.

In Figure 4, we see the thunderstorm cloud discharging onto the distribution line and the points of application of lightning arrestors by the power company at points #1 and #2. We notice that the operating voltage here is 11 000 volts on the primary line, and the transformer has a secondary voltage of typically 380/400 V serving the consumer.

We need to understand what is known as traveling-wave phenomena. When a lightning strike hits the power line, the line's inherent construction makes it capable of withstanding as much as 95 000 V across its insulation system.
Figure 4 - Protections in zone 0

We call this the basic impulse level (BIL).
Most 11 000-V line construction would have a BIL rating of 95 kV. This tells us that the wire insulation, the cross-arms and all of the other parts near the current-carrying conductors are able to withstand this high voltage.

A lightning arrestor applied on an 11 000-V line might have a spark-over characteristic of approximately 22 000 V. This high spark-over level enables the arrestor to wait until the peak of the 11 000-V operating waveform is exceeded before discharging the energy into the earth.
The peak of the 11 000-V RMS wave would be somewhere in the neighborhood of 15 000 V. As the voltage reaches the 22 000-V level and stays there while the lightning arrestor performs its discharge, that voltage waveform travels along the power line, moving very fast to all points of the line. At places where there is a discontinuity in the line, such as points #3 or #4 in our chart, the traveling wave arrives at 22 000 V, doubles, and starts back down the line at 44 000 V.

This type of phenomenon is known as reflection of the traveling wave, and it occurs at open parts of the circuit or even at the primaries of transformers. When the primary of the distribution transformer serving the building sees 44 000 V, the secondary supplying the building will have an overvoltage condition on it.
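The doubling follows from the reflection coefficient Γ = (Z_load − Z_line) / (Z_load + Z_line), which approaches +1 at an open end or a high-impedance transformer primary. A small sketch (the 400 Ω line surge impedance is an assumed typical value):

def reflected_total(v_incident, z_line, z_load):
    # Total voltage at the discontinuity = incident wave * (1 + gamma).
    gamma = (z_load - z_line) / (z_load + z_line)
    return v_incident * (1 + gamma)

V_incident = 22_000   # volts, arrestor spark-over level from the text
Z_line = 400.0        # ohms, assumed overhead-line surge impedance

# An effectively open end (very high load impedance) doubles the wave:
print(f"{reflected_total(V_incident, Z_line, 1e9):,.0f} V")   # ~44,000 V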

Thus, points #5 and #6 on our chart require us to think in terms of some type of lightning-protective devices at the secondary of the transformer, the service entrance to the building and then further on into the building such as point #6 for the sensitive equipment to be fully protected in this facility.